00:00:00.001 Started by upstream project "autotest-per-patch" build number 132753
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.130 Fetching changes from the remote Git repository
00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.191 Using shallow fetch with depth 1
00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.191 > git --version # timeout=10
00:00:00.268 > git --version # 'git version 2.39.2'
00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.315 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.315 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.089 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.106 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.120 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.120 > git config core.sparsecheckout # timeout=10
00:00:04.134 > git read-tree -mu HEAD # timeout=10
00:00:04.154 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.189 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.189 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.277 [Pipeline] Start of Pipeline
00:00:04.293 [Pipeline] library
00:00:04.295 Loading library shm_lib@master
00:00:04.295 Library shm_lib@master is cached. Copying from home.
00:00:04.313 [Pipeline] node
00:00:04.336 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.338 [Pipeline] {
00:00:04.347 [Pipeline] catchError
00:00:04.348 [Pipeline] {
00:00:04.361 [Pipeline] wrap
00:00:04.371 [Pipeline] {
00:00:04.377 [Pipeline] stage
00:00:04.379 [Pipeline] { (Prologue)
00:00:04.638 [Pipeline] sh
00:00:05.393 + logger -p user.info -t JENKINS-CI
00:00:05.425 [Pipeline] echo
00:00:05.426 Node: GP8
00:00:05.435 [Pipeline] sh
00:00:05.796 [Pipeline] setCustomBuildProperty
00:00:05.809 [Pipeline] echo
00:00:05.812 Cleanup processes
00:00:05.818 [Pipeline] sh
00:00:06.112 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.112 12348 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.125 [Pipeline] sh
00:00:06.420 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.420 ++ grep -v 'sudo pgrep'
00:00:06.420 ++ awk '{print $1}'
00:00:06.420 + sudo kill -9
00:00:06.420 + true
00:00:06.437 [Pipeline] cleanWs
00:00:06.448 [WS-CLEANUP] Deleting project workspace...
00:00:06.448 [WS-CLEANUP] Deferred wipeout is used...
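The cleanup step above chains `pgrep -af`, `grep -v`, and `awk` to collect stale PIDs, then relies on the trailing `+ true` to absorb the error when `kill -9` receives an empty list. A minimal sketch of the same idiom; the workspace path is a placeholder, and the `sudo` the CI job uses is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the log's process-cleanup idiom. WORKSPACE stands in for
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
WORKSPACE="${WORKSPACE:-/tmp/example-workspace}"

# pgrep -af prints "PID full-command-line" for every process whose full
# command line matches; drop the pgrep process itself, keep the PID column.
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}')

# kill -9 with no arguments exits non-zero, which is why the log follows it
# with "+ true"; guarding on an empty list is equivalent.
if [ -n "$pids" ]; then
    kill -9 $pids || true
fi
echo "cleaned: ${pids:-none}"
```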
00:00:06.459 [WS-CLEANUP] done
00:00:06.462 [Pipeline] setCustomBuildProperty
00:00:06.475 [Pipeline] sh
00:00:06.761 + sudo git config --global --replace-all safe.directory '*'
00:00:06.919 [Pipeline] httpRequest
00:00:08.591 [Pipeline] echo
00:00:08.593 Sorcerer 10.211.164.101 is alive
00:00:08.602 [Pipeline] retry
00:00:08.603 [Pipeline] {
00:00:08.617 [Pipeline] httpRequest
00:00:08.622 HttpMethod: GET
00:00:08.622 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.624 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.633 Response Code: HTTP/1.1 200 OK
00:00:08.634 Success: Status code 200 is in the accepted range: 200,404
00:00:08.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.734 [Pipeline] }
00:00:19.750 [Pipeline] // retry
00:00:19.757 [Pipeline] sh
00:00:20.053 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.073 [Pipeline] httpRequest
00:00:20.481 [Pipeline] echo
00:00:20.483 Sorcerer 10.211.164.101 is alive
00:00:20.491 [Pipeline] retry
00:00:20.494 [Pipeline] {
00:00:20.508 [Pipeline] httpRequest
00:00:20.513 HttpMethod: GET
00:00:20.514 URL: http://10.211.164.101/packages/spdk_0787c2b4effc98fa5ff7cd6698b0cb6761e67340.tar.gz
00:00:20.515 Sending request to url: http://10.211.164.101/packages/spdk_0787c2b4effc98fa5ff7cd6698b0cb6761e67340.tar.gz
00:00:20.523 Response Code: HTTP/1.1 200 OK
00:00:20.524 Success: Status code 200 is in the accepted range: 200,404
00:00:20.524 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0787c2b4effc98fa5ff7cd6698b0cb6761e67340.tar.gz
00:02:57.599 [Pipeline] }
00:02:57.613 [Pipeline] // retry
00:02:57.622 [Pipeline] sh
00:02:57.923 + tar --no-same-owner -xf spdk_0787c2b4effc98fa5ff7cd6698b0cb6761e67340.tar.gz
00:03:00.492 [Pipeline] sh
00:03:00.780 + git -C spdk log --oneline -n5
00:03:00.780 0787c2b4e accel/mlx5: Support mkey registration
00:03:00.780 0ea9ac02f accel/mlx5: Create pool of UMRs
00:03:00.780 60adca7e1 lib/mlx5: API to configure UMR
00:03:00.780 c2471e450 nvmf: Clean unassociated_qpairs on connect error
00:03:00.780 5469bd2d1 nvmf/rdma: Fix destroy of uninitialized qpair
00:03:00.790 [Pipeline] }
00:03:00.802 [Pipeline] // stage
00:03:00.811 [Pipeline] stage
00:03:00.813 [Pipeline] { (Prepare)
00:03:00.830 [Pipeline] writeFile
00:03:00.845 [Pipeline] sh
00:03:01.138 + logger -p user.info -t JENKINS-CI
00:03:01.152 [Pipeline] sh
00:03:01.444 + logger -p user.info -t JENKINS-CI
00:03:01.460 [Pipeline] sh
00:03:01.748 + cat autorun-spdk.conf
00:03:01.748 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.748 SPDK_TEST_NVMF=1
00:03:01.748 SPDK_TEST_NVME_CLI=1
00:03:01.748 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:01.748 SPDK_TEST_NVMF_NICS=e810
00:03:01.748 SPDK_TEST_VFIOUSER=1
00:03:01.748 SPDK_RUN_UBSAN=1
00:03:01.748 NET_TYPE=phy
00:03:01.757 RUN_NIGHTLY=0
00:03:01.762 [Pipeline] readFile
00:03:01.840 [Pipeline] withEnv
00:03:01.842 [Pipeline] {
00:03:01.857 [Pipeline] sh
00:03:02.153 + set -ex
00:03:02.153 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:02.153 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:02.153 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.153 ++ SPDK_TEST_NVMF=1
00:03:02.153 ++ SPDK_TEST_NVME_CLI=1
00:03:02.153 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:02.153 ++ SPDK_TEST_NVMF_NICS=e810
00:03:02.153 ++ SPDK_TEST_VFIOUSER=1
00:03:02.153 ++ SPDK_RUN_UBSAN=1
00:03:02.153 ++ NET_TYPE=phy
00:03:02.153 ++ RUN_NIGHTLY=0
00:03:02.153 + case $SPDK_TEST_NVMF_NICS in
00:03:02.153 + DRIVERS=ice
00:03:02.153 + [[ tcp == \r\d\m\a ]]
00:03:02.153 + [[ -n ice ]]
00:03:02.153 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:02.153 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:05.464 rmmod: ERROR: Module irdma is not currently loaded
00:03:05.464 rmmod: ERROR: Module i40iw is not currently loaded
00:03:05.464 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:05.464 + true
00:03:05.464 + for D in $DRIVERS
00:03:05.464 + sudo modprobe ice
00:03:05.464 + exit 0
00:03:05.476 [Pipeline] }
00:03:05.491 [Pipeline] // withEnv
00:03:05.496 [Pipeline] }
00:03:05.510 [Pipeline] // stage
00:03:05.520 [Pipeline] catchError
00:03:05.522 [Pipeline] {
00:03:05.536 [Pipeline] timeout
00:03:05.537 Timeout set to expire in 1 hr 0 min
00:03:05.539 [Pipeline] {
00:03:05.556 [Pipeline] stage
00:03:05.559 [Pipeline] { (Tests)
00:03:05.576 [Pipeline] sh
00:03:05.869 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:05.869 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:05.869 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:05.869 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:05.869 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:05.869 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:05.869 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:05.869 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:05.869 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:05.869 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:05.869 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:05.869 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:05.869 + source /etc/os-release
00:03:05.869 ++ NAME='Fedora Linux'
00:03:05.869 ++ VERSION='39 (Cloud Edition)'
00:03:05.869 ++ ID=fedora
00:03:05.869 ++ VERSION_ID=39
00:03:05.869 ++ VERSION_CODENAME=
00:03:05.869 ++ PLATFORM_ID=platform:f39
00:03:05.869 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:05.869 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:05.869 ++ LOGO=fedora-logo-icon
00:03:05.869 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:05.869 ++ HOME_URL=https://fedoraproject.org/
00:03:05.869 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:05.869 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:05.869 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:05.869 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:05.869 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:05.869 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:05.869 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:05.869 ++ SUPPORT_END=2024-11-12
00:03:05.869 ++ VARIANT='Cloud Edition'
00:03:05.869 ++ VARIANT_ID=cloud
00:03:05.869 + uname -a
00:03:05.869 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:05.869 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:06.809 Hugepages
00:03:06.809 node hugesize free / total
00:03:06.809 node0 1048576kB 0 / 0
00:03:06.809 node0 2048kB 0 / 0
00:03:06.809 node1 1048576kB 0 / 0
00:03:06.809 node1 2048kB 0 / 0
00:03:06.809
00:03:07.069 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:07.069 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:07.069 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:07.069 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:07.069 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:07.069 + rm -f /tmp/spdk-ld-path
00:03:07.069 + source autorun-spdk.conf
00:03:07.069 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.069 ++ SPDK_TEST_NVMF=1
00:03:07.069 ++ SPDK_TEST_NVME_CLI=1
00:03:07.069 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:07.069 ++ SPDK_TEST_NVMF_NICS=e810
00:03:07.069 ++ SPDK_TEST_VFIOUSER=1
00:03:07.069 ++ SPDK_RUN_UBSAN=1
00:03:07.069 ++ NET_TYPE=phy
00:03:07.069 ++ RUN_NIGHTLY=0
00:03:07.069 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:07.069 + [[ -n '' ]]
00:03:07.069 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:07.069 + for M in /var/spdk/build-*-manifest.txt
00:03:07.069 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:07.069 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:07.069 + for M in /var/spdk/build-*-manifest.txt
00:03:07.069 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:07.069 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:07.069 + for M in /var/spdk/build-*-manifest.txt
00:03:07.069 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:07.069 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:07.069 ++ uname
00:03:07.069 + [[ Linux == \L\i\n\u\x ]]
00:03:07.069 + sudo dmesg -T
00:03:07.069 + sudo dmesg --clear
00:03:07.069 + dmesg_pid=13635
00:03:07.069 + [[ Fedora Linux == FreeBSD ]]
00:03:07.069 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.069 + sudo dmesg -Tw
00:03:07.069 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.069 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:07.069 + [[ -x /usr/src/fio-static/fio ]]
00:03:07.069 + export FIO_BIN=/usr/src/fio-static/fio
00:03:07.069 + FIO_BIN=/usr/src/fio-static/fio
00:03:07.069 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:07.069 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:07.069 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:07.069 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.069 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.069 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:07.069 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.069 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.069 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:07.069 19:01:52 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.069 19:01:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:07.069 19:01:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:07.069 19:01:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:07.069 19:01:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:07.329 19:01:52 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.329 19:01:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:07.329 19:01:52 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:07.329 19:01:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:07.329 19:01:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:07.329 19:01:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:07.329 19:01:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.329 19:01:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.329 19:01:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.329 19:01:52 -- paths/export.sh@5 -- $ export PATH
00:03:07.329 19:01:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.329 19:01:52 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:07.329 19:01:52 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:07.329 19:01:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733508112.XXXXXX
00:03:07.329 19:01:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733508112.BZX0HD
00:03:07.329 19:01:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:07.329 19:01:52 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:07.329 19:01:52 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:07.329 19:01:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:07.329 19:01:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:07.329 19:01:52 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:07.329 19:01:52 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:07.329 19:01:52 -- common/autotest_common.sh@10 -- $ set +x
00:03:07.329 19:01:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:07.329 19:01:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:07.329 19:01:52 -- pm/common@17 -- $ local monitor
00:03:07.329 19:01:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.329 19:01:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.329 19:01:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.329 19:01:52 -- pm/common@21 -- $ date +%s
00:03:07.329 19:01:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.329 19:01:52 -- pm/common@21 -- $ date +%s
00:03:07.329 19:01:52 -- pm/common@25 -- $ sleep 1
00:03:07.329 19:01:52 -- pm/common@21 -- $ date +%s
00:03:07.330 19:01:52 -- pm/common@21 -- $ date +%s
00:03:07.330 19:01:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508112
00:03:07.330 19:01:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508112
00:03:07.330 19:01:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508112
00:03:07.330 19:01:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733508112
00:03:07.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508112_collect-vmstat.pm.log
00:03:07.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508112_collect-cpu-load.pm.log
00:03:07.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508112_collect-cpu-temp.pm.log
00:03:07.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733508112_collect-bmc-pm.bmc.pm.log
00:03:08.272 19:01:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:08.272 19:01:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:08.272 19:01:53 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:08.272 19:01:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:08.272 19:01:53 -- spdk/autobuild.sh@16 -- $ date -u
00:03:08.272 Fri Dec 6 06:01:53 PM UTC 2024
00:03:08.272 19:01:53 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:08.272 v25.01-pre-309-g0787c2b4e
00:03:08.272 19:01:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:08.272 19:01:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:08.272 19:01:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:08.272 19:01:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:08.272 19:01:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.272 19:01:53 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.272 ************************************
00:03:08.272 START TEST ubsan
00:03:08.272 ************************************
00:03:08.272 19:01:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:08.272 using ubsan
00:03:08.272
00:03:08.272 real 0m0.000s
00:03:08.272 user 0m0.000s
00:03:08.272 sys 0m0.000s
00:03:08.272 19:01:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:08.272 19:01:53 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:08.272 ************************************
00:03:08.272 END TEST ubsan
00:03:08.272 ************************************
00:03:08.272 19:01:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:08.272 19:01:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:08.272 19:01:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:08.272 19:01:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:08.841 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:08.841 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:09.779 Using 'verbs' RDMA provider
00:03:22.943 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:32.949 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:32.949 Creating mk/config.mk...done.
00:03:33.209 Creating mk/cc.flags.mk...done.
00:03:33.209 Type 'make' to build.
00:03:33.209 19:02:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:33.209 19:02:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:33.209 19:02:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:33.209 19:02:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:33.209 ************************************
00:03:33.209 START TEST make
00:03:33.209 ************************************
00:03:33.209 19:02:18 make -- common/autotest_common.sh@1129 -- $ make -j48
00:03:33.472 make[1]: Nothing to be done for 'all'.
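The build step above runs `run_test make make -j48`, with the job count hard-coded for this node. A hedged sketch of deriving the count from the machine instead (the `make` invocation itself is left commented, since this sketch has no Makefile):

```shell
# Derive a parallel job count from the host rather than hard-coding -j48 as
# the log's run_test invocation does; getconf is the POSIX fallback for nproc.
jobs=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "would invoke: make -j${jobs}"
# make -j"$jobs"
```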
00:03:36.034 The Meson build system
00:03:36.034 Version: 1.5.0
00:03:36.034 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:36.034 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:36.034 Build type: native build
00:03:36.034 Project name: libvfio-user
00:03:36.034 Project version: 0.0.1
00:03:36.034 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:36.034 C linker for the host machine: cc ld.bfd 2.40-14
00:03:36.034 Host machine cpu family: x86_64
00:03:36.034 Host machine cpu: x86_64
00:03:36.034 Run-time dependency threads found: YES
00:03:36.034 Library dl found: YES
00:03:36.034 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:36.034 Run-time dependency json-c found: YES 0.17
00:03:36.034 Run-time dependency cmocka found: YES 1.1.7
00:03:36.034 Program pytest-3 found: NO
00:03:36.034 Program flake8 found: NO
00:03:36.034 Program misspell-fixer found: NO
00:03:36.034 Program restructuredtext-lint found: NO
00:03:36.034 Program valgrind found: YES (/usr/bin/valgrind)
00:03:36.034 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:36.034 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:36.034 Compiler for C supports arguments -Wwrite-strings: YES
00:03:36.034 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:36.034 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:36.034 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:36.034 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:36.034 Build targets in project: 8
00:03:36.034 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:36.034 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:36.034
00:03:36.034 libvfio-user 0.0.1
00:03:36.034
00:03:36.034 User defined options
00:03:36.034 buildtype : debug
00:03:36.034 default_library: shared
00:03:36.034 libdir : /usr/local/lib
00:03:36.034
00:03:36.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:36.990 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:36.990 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:36.990 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:36.990 [3/37] Compiling C object samples/null.p/null.c.o
00:03:36.990 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:36.990 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:36.990 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:36.990 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:36.990 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:36.990 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:36.990 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:36.990 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:36.990 [12/37] Compiling C object samples/server.p/server.c.o
00:03:36.990 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:36.990 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:36.990 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:36.990 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:36.990 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:36.990 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:36.990 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:36.990 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:36.990 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:36.990 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:36.990 [23/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:36.990 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:37.255 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:37.255 [26/37] Compiling C object samples/client.p/client.c.o
00:03:37.255 [27/37] Linking target samples/client
00:03:37.255 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:37.255 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:37.255 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:37.520 [31/37] Linking target test/unit_tests
00:03:37.520 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:37.781 [33/37] Linking target samples/server
00:03:37.781 [34/37] Linking target samples/gpio-pci-idio-16
00:03:37.781 [35/37] Linking target samples/null
00:03:37.781 [36/37] Linking target samples/lspci
00:03:37.781 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:37.781 INFO: autodetecting backend as ninja
00:03:37.781 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:37.781 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:38.350 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:38.350 ninja: no work to do.
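The install step above uses `DESTDIR=… meson install`, which stages every artifact under a scratch prefix instead of the live filesystem. The mechanism is generic to staged installs; a minimal sketch using plain `install`, with hypothetical file names (the real build installs `libvfio-user.so.0.0.1`):

```shell
#!/usr/bin/env bash
set -e
# DESTDIR-style staging: files land under $stage/<prefix> rather than
# /usr/local, mirroring how the log copies libvfio-user into spdk/build.
stage=$(mktemp -d)
mkdir -p "$stage/usr/local/lib"

# Stand-in artifact for the built shared library.
artifact=$(mktemp)
install -m 644 "$artifact" "$stage/usr/local/lib/libexample.so.0.0.1"
ls "$stage/usr/local/lib"
```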
00:03:42.539 The Meson build system
00:03:42.539 Version: 1.5.0
00:03:42.539 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:42.539 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:42.539 Build type: native build
00:03:42.539 Program cat found: YES (/usr/bin/cat)
00:03:42.539 Project name: DPDK
00:03:42.539 Project version: 24.03.0
00:03:42.539 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:42.539 C linker for the host machine: cc ld.bfd 2.40-14
00:03:42.539 Host machine cpu family: x86_64
00:03:42.539 Host machine cpu: x86_64
00:03:42.539 Message: ## Building in Developer Mode ##
00:03:42.539 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:42.539 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:42.539 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:42.539 Program python3 found: YES (/usr/bin/python3)
00:03:42.539 Program cat found: YES (/usr/bin/cat)
00:03:42.539 Compiler for C supports arguments -march=native: YES
00:03:42.539 Checking for size of "void *" : 8
00:03:42.539 Checking for size of "void *" : 8 (cached)
00:03:42.539 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:42.539 Library m found: YES
00:03:42.539 Library numa found: YES
00:03:42.539 Has header "numaif.h" : YES
00:03:42.539 Library fdt found: NO
00:03:42.539 Library execinfo found: NO
00:03:42.539 Has header "execinfo.h" : YES
00:03:42.539 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:42.539 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:42.539 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:42.539 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:42.539 Run-time dependency openssl found: YES 3.1.1
00:03:42.539 Run-time dependency libpcap found: YES 1.10.4
00:03:42.539 Has header "pcap.h" with dependency libpcap: YES
00:03:42.539 Compiler for C supports arguments -Wcast-qual: YES
00:03:42.539 Compiler for C supports arguments -Wdeprecated: YES
00:03:42.539 Compiler for C supports arguments -Wformat: YES
00:03:42.539 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:42.539 Compiler for C supports arguments -Wformat-security: NO
00:03:42.539 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:42.539 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:42.539 Compiler for C supports arguments -Wnested-externs: YES
00:03:42.539 Compiler for C supports arguments -Wold-style-definition: YES
00:03:42.539 Compiler for C supports arguments -Wpointer-arith: YES
00:03:42.539 Compiler for C supports arguments -Wsign-compare: YES
00:03:42.539 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:42.539 Compiler for C supports arguments -Wundef: YES
00:03:42.539 Compiler for C supports arguments -Wwrite-strings: YES
00:03:42.539 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:42.539 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:42.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:42.539 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:42.539 Program objdump found: YES (/usr/bin/objdump)
00:03:42.539 Compiler for C supports arguments -mavx512f: YES
00:03:42.539 Checking if "AVX512 checking" compiles: YES
00:03:42.539 Fetching value of define "__SSE4_2__" : 1
00:03:42.539 Fetching value of define "__AES__" : 1
00:03:42.539 Fetching value of define "__AVX__" : 1
00:03:42.539 Fetching value of define "__AVX2__" : (undefined)
00:03:42.539 Fetching value of define "__AVX512BW__" : (undefined)
00:03:42.539 Fetching value of define "__AVX512CD__" : (undefined)
00:03:42.539 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:42.539 Fetching value of define "__AVX512F__" : (undefined)
00:03:42.539 Fetching value of define "__AVX512VL__" : (undefined)
00:03:42.539 Fetching value of define "__PCLMUL__" : 1
00:03:42.539 Fetching value of define "__RDRND__" : 1
00:03:42.539 Fetching value of define "__RDSEED__" : (undefined)
00:03:42.539 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:42.539 Fetching value of define "__znver1__" : (undefined)
00:03:42.539 Fetching value of define "__znver2__" : (undefined)
00:03:42.539 Fetching value of define "__znver3__" : (undefined)
00:03:42.539 Fetching value of define "__znver4__" : (undefined)
00:03:42.539 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:42.539 Message: lib/log: Defining dependency "log"
00:03:42.539 Message: lib/kvargs: Defining dependency "kvargs"
00:03:42.539 Message: lib/telemetry: Defining dependency "telemetry"
00:03:42.539 Checking for function "getentropy" : NO
00:03:42.539 Message: lib/eal: Defining dependency "eal"
00:03:42.539 Message: lib/ring: Defining dependency "ring"
00:03:42.539 Message: lib/rcu: Defining dependency "rcu"
00:03:42.539 Message: lib/mempool: Defining dependency "mempool"
00:03:42.539 Message: lib/mbuf: Defining dependency "mbuf"
00:03:42.539 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:42.539 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:42.539 Compiler for C supports arguments -mpclmul: YES
00:03:42.539 Compiler for C supports arguments -maes: YES
00:03:42.539 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:42.539 Compiler for C supports arguments -mavx512bw: YES
00:03:42.539 Compiler for C supports arguments -mavx512dq: YES
00:03:42.539 Compiler for C supports arguments -mavx512vl: YES
00:03:42.539 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:42.539 Compiler for C supports arguments -mavx2: YES
00:03:42.539 Compiler for C supports arguments -mavx: YES
00:03:42.539 Message: lib/net: Defining dependency "net"
00:03:42.539
Message: lib/meter: Defining dependency "meter" 00:03:42.539 Message: lib/ethdev: Defining dependency "ethdev" 00:03:42.539 Message: lib/pci: Defining dependency "pci" 00:03:42.539 Message: lib/cmdline: Defining dependency "cmdline" 00:03:42.539 Message: lib/hash: Defining dependency "hash" 00:03:42.539 Message: lib/timer: Defining dependency "timer" 00:03:42.539 Message: lib/compressdev: Defining dependency "compressdev" 00:03:42.539 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:42.539 Message: lib/dmadev: Defining dependency "dmadev" 00:03:42.539 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:42.539 Message: lib/power: Defining dependency "power" 00:03:42.539 Message: lib/reorder: Defining dependency "reorder" 00:03:42.539 Message: lib/security: Defining dependency "security" 00:03:42.539 Has header "linux/userfaultfd.h" : YES 00:03:42.539 Has header "linux/vduse.h" : YES 00:03:42.539 Message: lib/vhost: Defining dependency "vhost" 00:03:42.539 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:42.539 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:42.539 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:42.539 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:42.539 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:42.539 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:42.539 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:42.539 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:42.539 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:42.539 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:42.539 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:42.540 Configuring doxy-api-html.conf using configuration 00:03:42.540 Configuring doxy-api-man.conf using configuration 00:03:42.540 
Program mandb found: YES (/usr/bin/mandb) 00:03:42.540 Program sphinx-build found: NO 00:03:42.540 Configuring rte_build_config.h using configuration 00:03:42.540 Message: 00:03:42.540 ================= 00:03:42.540 Applications Enabled 00:03:42.540 ================= 00:03:42.540 00:03:42.540 apps: 00:03:42.540 00:03:42.540 00:03:42.540 Message: 00:03:42.540 ================= 00:03:42.540 Libraries Enabled 00:03:42.540 ================= 00:03:42.540 00:03:42.540 libs: 00:03:42.540 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:42.540 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:42.540 cryptodev, dmadev, power, reorder, security, vhost, 00:03:42.540 00:03:42.540 Message: 00:03:42.540 =============== 00:03:42.540 Drivers Enabled 00:03:42.540 =============== 00:03:42.540 00:03:42.540 common: 00:03:42.540 00:03:42.540 bus: 00:03:42.540 pci, vdev, 00:03:42.540 mempool: 00:03:42.540 ring, 00:03:42.540 dma: 00:03:42.540 00:03:42.540 net: 00:03:42.540 00:03:42.540 crypto: 00:03:42.540 00:03:42.540 compress: 00:03:42.540 00:03:42.540 vdpa: 00:03:42.540 00:03:42.540 00:03:42.540 Message: 00:03:42.540 ================= 00:03:42.540 Content Skipped 00:03:42.540 ================= 00:03:42.540 00:03:42.540 apps: 00:03:42.540 dumpcap: explicitly disabled via build config 00:03:42.540 graph: explicitly disabled via build config 00:03:42.540 pdump: explicitly disabled via build config 00:03:42.540 proc-info: explicitly disabled via build config 00:03:42.540 test-acl: explicitly disabled via build config 00:03:42.540 test-bbdev: explicitly disabled via build config 00:03:42.540 test-cmdline: explicitly disabled via build config 00:03:42.540 test-compress-perf: explicitly disabled via build config 00:03:42.540 test-crypto-perf: explicitly disabled via build config 00:03:42.540 test-dma-perf: explicitly disabled via build config 00:03:42.540 test-eventdev: explicitly disabled via build config 00:03:42.540 test-fib: explicitly disabled via build 
config 00:03:42.540 test-flow-perf: explicitly disabled via build config 00:03:42.540 test-gpudev: explicitly disabled via build config 00:03:42.540 test-mldev: explicitly disabled via build config 00:03:42.540 test-pipeline: explicitly disabled via build config 00:03:42.540 test-pmd: explicitly disabled via build config 00:03:42.540 test-regex: explicitly disabled via build config 00:03:42.540 test-sad: explicitly disabled via build config 00:03:42.540 test-security-perf: explicitly disabled via build config 00:03:42.540 00:03:42.540 libs: 00:03:42.540 argparse: explicitly disabled via build config 00:03:42.540 metrics: explicitly disabled via build config 00:03:42.540 acl: explicitly disabled via build config 00:03:42.540 bbdev: explicitly disabled via build config 00:03:42.540 bitratestats: explicitly disabled via build config 00:03:42.540 bpf: explicitly disabled via build config 00:03:42.540 cfgfile: explicitly disabled via build config 00:03:42.540 distributor: explicitly disabled via build config 00:03:42.540 efd: explicitly disabled via build config 00:03:42.540 eventdev: explicitly disabled via build config 00:03:42.540 dispatcher: explicitly disabled via build config 00:03:42.540 gpudev: explicitly disabled via build config 00:03:42.540 gro: explicitly disabled via build config 00:03:42.540 gso: explicitly disabled via build config 00:03:42.540 ip_frag: explicitly disabled via build config 00:03:42.540 jobstats: explicitly disabled via build config 00:03:42.540 latencystats: explicitly disabled via build config 00:03:42.540 lpm: explicitly disabled via build config 00:03:42.540 member: explicitly disabled via build config 00:03:42.540 pcapng: explicitly disabled via build config 00:03:42.540 rawdev: explicitly disabled via build config 00:03:42.540 regexdev: explicitly disabled via build config 00:03:42.540 mldev: explicitly disabled via build config 00:03:42.540 rib: explicitly disabled via build config 00:03:42.540 sched: explicitly disabled via build 
config 00:03:42.540 stack: explicitly disabled via build config 00:03:42.540 ipsec: explicitly disabled via build config 00:03:42.540 pdcp: explicitly disabled via build config 00:03:42.540 fib: explicitly disabled via build config 00:03:42.540 port: explicitly disabled via build config 00:03:42.540 pdump: explicitly disabled via build config 00:03:42.540 table: explicitly disabled via build config 00:03:42.540 pipeline: explicitly disabled via build config 00:03:42.540 graph: explicitly disabled via build config 00:03:42.540 node: explicitly disabled via build config 00:03:42.540 00:03:42.540 drivers: 00:03:42.540 common/cpt: not in enabled drivers build config 00:03:42.540 common/dpaax: not in enabled drivers build config 00:03:42.540 common/iavf: not in enabled drivers build config 00:03:42.540 common/idpf: not in enabled drivers build config 00:03:42.540 common/ionic: not in enabled drivers build config 00:03:42.540 common/mvep: not in enabled drivers build config 00:03:42.540 common/octeontx: not in enabled drivers build config 00:03:42.540 bus/auxiliary: not in enabled drivers build config 00:03:42.540 bus/cdx: not in enabled drivers build config 00:03:42.540 bus/dpaa: not in enabled drivers build config 00:03:42.540 bus/fslmc: not in enabled drivers build config 00:03:42.540 bus/ifpga: not in enabled drivers build config 00:03:42.540 bus/platform: not in enabled drivers build config 00:03:42.540 bus/uacce: not in enabled drivers build config 00:03:42.540 bus/vmbus: not in enabled drivers build config 00:03:42.540 common/cnxk: not in enabled drivers build config 00:03:42.540 common/mlx5: not in enabled drivers build config 00:03:42.540 common/nfp: not in enabled drivers build config 00:03:42.540 common/nitrox: not in enabled drivers build config 00:03:42.540 common/qat: not in enabled drivers build config 00:03:42.540 common/sfc_efx: not in enabled drivers build config 00:03:42.540 mempool/bucket: not in enabled drivers build config 00:03:42.540 mempool/cnxk: 
not in enabled drivers build config 00:03:42.540 mempool/dpaa: not in enabled drivers build config 00:03:42.540 mempool/dpaa2: not in enabled drivers build config 00:03:42.540 mempool/octeontx: not in enabled drivers build config 00:03:42.540 mempool/stack: not in enabled drivers build config 00:03:42.540 dma/cnxk: not in enabled drivers build config 00:03:42.540 dma/dpaa: not in enabled drivers build config 00:03:42.540 dma/dpaa2: not in enabled drivers build config 00:03:42.540 dma/hisilicon: not in enabled drivers build config 00:03:42.540 dma/idxd: not in enabled drivers build config 00:03:42.540 dma/ioat: not in enabled drivers build config 00:03:42.540 dma/skeleton: not in enabled drivers build config 00:03:42.540 net/af_packet: not in enabled drivers build config 00:03:42.540 net/af_xdp: not in enabled drivers build config 00:03:42.540 net/ark: not in enabled drivers build config 00:03:42.540 net/atlantic: not in enabled drivers build config 00:03:42.540 net/avp: not in enabled drivers build config 00:03:42.540 net/axgbe: not in enabled drivers build config 00:03:42.540 net/bnx2x: not in enabled drivers build config 00:03:42.540 net/bnxt: not in enabled drivers build config 00:03:42.540 net/bonding: not in enabled drivers build config 00:03:42.540 net/cnxk: not in enabled drivers build config 00:03:42.540 net/cpfl: not in enabled drivers build config 00:03:42.540 net/cxgbe: not in enabled drivers build config 00:03:42.540 net/dpaa: not in enabled drivers build config 00:03:42.540 net/dpaa2: not in enabled drivers build config 00:03:42.540 net/e1000: not in enabled drivers build config 00:03:42.540 net/ena: not in enabled drivers build config 00:03:42.540 net/enetc: not in enabled drivers build config 00:03:42.540 net/enetfec: not in enabled drivers build config 00:03:42.540 net/enic: not in enabled drivers build config 00:03:42.540 net/failsafe: not in enabled drivers build config 00:03:42.540 net/fm10k: not in enabled drivers build config 00:03:42.540 
net/gve: not in enabled drivers build config 00:03:42.540 net/hinic: not in enabled drivers build config 00:03:42.540 net/hns3: not in enabled drivers build config 00:03:42.540 net/i40e: not in enabled drivers build config 00:03:42.540 net/iavf: not in enabled drivers build config 00:03:42.540 net/ice: not in enabled drivers build config 00:03:42.540 net/idpf: not in enabled drivers build config 00:03:42.540 net/igc: not in enabled drivers build config 00:03:42.540 net/ionic: not in enabled drivers build config 00:03:42.540 net/ipn3ke: not in enabled drivers build config 00:03:42.540 net/ixgbe: not in enabled drivers build config 00:03:42.540 net/mana: not in enabled drivers build config 00:03:42.540 net/memif: not in enabled drivers build config 00:03:42.540 net/mlx4: not in enabled drivers build config 00:03:42.540 net/mlx5: not in enabled drivers build config 00:03:42.540 net/mvneta: not in enabled drivers build config 00:03:42.540 net/mvpp2: not in enabled drivers build config 00:03:42.540 net/netvsc: not in enabled drivers build config 00:03:42.540 net/nfb: not in enabled drivers build config 00:03:42.540 net/nfp: not in enabled drivers build config 00:03:42.540 net/ngbe: not in enabled drivers build config 00:03:42.541 net/null: not in enabled drivers build config 00:03:42.541 net/octeontx: not in enabled drivers build config 00:03:42.541 net/octeon_ep: not in enabled drivers build config 00:03:42.541 net/pcap: not in enabled drivers build config 00:03:42.541 net/pfe: not in enabled drivers build config 00:03:42.541 net/qede: not in enabled drivers build config 00:03:42.541 net/ring: not in enabled drivers build config 00:03:42.541 net/sfc: not in enabled drivers build config 00:03:42.541 net/softnic: not in enabled drivers build config 00:03:42.541 net/tap: not in enabled drivers build config 00:03:42.541 net/thunderx: not in enabled drivers build config 00:03:42.541 net/txgbe: not in enabled drivers build config 00:03:42.541 net/vdev_netvsc: not in enabled 
drivers build config 00:03:42.541 net/vhost: not in enabled drivers build config 00:03:42.541 net/virtio: not in enabled drivers build config 00:03:42.541 net/vmxnet3: not in enabled drivers build config 00:03:42.541 raw/*: missing internal dependency, "rawdev" 00:03:42.541 crypto/armv8: not in enabled drivers build config 00:03:42.541 crypto/bcmfs: not in enabled drivers build config 00:03:42.541 crypto/caam_jr: not in enabled drivers build config 00:03:42.541 crypto/ccp: not in enabled drivers build config 00:03:42.541 crypto/cnxk: not in enabled drivers build config 00:03:42.541 crypto/dpaa_sec: not in enabled drivers build config 00:03:42.541 crypto/dpaa2_sec: not in enabled drivers build config 00:03:42.541 crypto/ipsec_mb: not in enabled drivers build config 00:03:42.541 crypto/mlx5: not in enabled drivers build config 00:03:42.541 crypto/mvsam: not in enabled drivers build config 00:03:42.541 crypto/nitrox: not in enabled drivers build config 00:03:42.541 crypto/null: not in enabled drivers build config 00:03:42.541 crypto/octeontx: not in enabled drivers build config 00:03:42.541 crypto/openssl: not in enabled drivers build config 00:03:42.541 crypto/scheduler: not in enabled drivers build config 00:03:42.541 crypto/uadk: not in enabled drivers build config 00:03:42.541 crypto/virtio: not in enabled drivers build config 00:03:42.541 compress/isal: not in enabled drivers build config 00:03:42.541 compress/mlx5: not in enabled drivers build config 00:03:42.541 compress/nitrox: not in enabled drivers build config 00:03:42.541 compress/octeontx: not in enabled drivers build config 00:03:42.541 compress/zlib: not in enabled drivers build config 00:03:42.541 regex/*: missing internal dependency, "regexdev" 00:03:42.541 ml/*: missing internal dependency, "mldev" 00:03:42.541 vdpa/ifc: not in enabled drivers build config 00:03:42.541 vdpa/mlx5: not in enabled drivers build config 00:03:42.541 vdpa/nfp: not in enabled drivers build config 00:03:42.541 vdpa/sfc: not 
in enabled drivers build config 00:03:42.541 event/*: missing internal dependency, "eventdev" 00:03:42.541 baseband/*: missing internal dependency, "bbdev" 00:03:42.541 gpu/*: missing internal dependency, "gpudev" 00:03:42.541 00:03:42.541 00:03:42.800 Build targets in project: 85 00:03:42.800 00:03:42.800 DPDK 24.03.0 00:03:42.800 00:03:42.800 User defined options 00:03:42.800 buildtype : debug 00:03:42.800 default_library : shared 00:03:42.800 libdir : lib 00:03:42.800 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:42.800 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:42.800 c_link_args : 00:03:42.800 cpu_instruction_set: native 00:03:42.800 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:42.800 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:42.800 enable_docs : false 00:03:42.800 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:42.800 enable_kmods : false 00:03:42.800 max_lcores : 128 00:03:42.800 tests : false 00:03:42.800 00:03:42.800 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:43.376 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:43.376 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:43.376 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:43.376 [3/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:43.376 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:43.376 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:43.376 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:43.376 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:43.376 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:43.376 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:43.376 [10/268] Linking static target lib/librte_kvargs.a 00:03:43.376 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:43.376 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:43.376 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:43.640 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:43.640 [15/268] Linking static target lib/librte_log.a 00:03:43.640 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:44.223 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.223 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:44.223 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:44.223 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:44.223 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:44.223 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:44.223 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:44.223 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:44.223 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:44.223 [26/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:44.223 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:44.223 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:44.223 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:44.223 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:44.223 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:44.223 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:44.489 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:44.489 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:44.489 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:44.489 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:44.489 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:44.489 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:44.489 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:44.489 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:44.489 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:44.489 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:44.489 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:44.489 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:44.489 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:44.489 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:44.489 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:44.489 [48/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:44.489 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:44.489 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:44.489 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:44.489 [52/268] Linking static target lib/librte_telemetry.a 00:03:44.489 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:44.489 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:44.489 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:44.489 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:44.489 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:44.489 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:44.489 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:44.489 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:44.489 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:44.489 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:44.489 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:44.489 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:44.754 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:44.754 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.754 [67/268] Linking target lib/librte_log.so.24.1 00:03:45.019 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:45.019 [69/268] Linking static target lib/librte_pci.a 00:03:45.019 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:45.019 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:45.281 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:45.281 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:45.281 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:45.281 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:45.281 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:45.281 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:45.281 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:45.281 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:45.281 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:45.281 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:45.281 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:45.281 [83/268] Linking target lib/librte_kvargs.so.24.1 00:03:45.281 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:45.281 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:45.282 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:45.282 [87/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:45.282 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:45.282 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:45.282 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:45.282 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:45.282 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:45.282 [93/268] Linking static target lib/librte_ring.a 00:03:45.282 [94/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:45.282 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:45.545 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:45.545 [97/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.545 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:45.545 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:45.545 [100/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.545 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:45.545 [102/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:45.545 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:45.545 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:45.545 [105/268] Linking static target lib/librte_meter.a 00:03:45.545 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:45.545 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:45.545 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:45.545 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:45.545 [110/268] Linking target lib/librte_telemetry.so.24.1 00:03:45.545 [111/268] Linking static target lib/librte_eal.a 00:03:45.545 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:45.545 [113/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:45.545 [114/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:45.545 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:45.545 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:45.545 
[117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:45.545 [118/268] Linking static target lib/librte_rcu.a 00:03:45.545 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:45.809 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:45.809 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:45.809 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:45.809 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:45.809 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:45.809 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:45.809 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:45.809 [127/268] Linking static target lib/librte_mempool.a 00:03:45.809 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:45.809 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:45.809 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:45.809 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:45.809 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:45.809 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:45.809 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:46.081 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:46.081 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:46.081 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:46.082 [138/268] Linking static target lib/librte_net.a 00:03:46.082 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:46.082 [140/268] Generating lib/ring.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:46.082 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.342 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:46.342 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:46.342 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:46.342 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:46.342 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:46.342 [147/268] Linking static target lib/librte_cmdline.a 00:03:46.342 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.342 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:46.342 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:46.342 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:46.342 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:46.342 [153/268] Linking static target lib/librte_timer.a 00:03:46.342 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:46.342 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:46.602 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:46.602 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.602 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:46.602 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:46.602 [160/268] Linking static target lib/librte_dmadev.a 00:03:46.602 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:46.602 [162/268] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:46.602 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:46.602 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:46.602 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:46.602 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:46.602 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:46.861 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.861 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:46.862 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:46.862 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.862 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:46.862 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:46.862 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:46.862 [175/268] Linking static target lib/librte_power.a 00:03:46.862 [176/268] Linking static target lib/librte_compressdev.a 00:03:46.862 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:46.862 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:46.862 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:46.862 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:47.121 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:47.121 [182/268] Linking static target lib/librte_hash.a 00:03:47.121 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.121 [184/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:47.121 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:47.121 [186/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:47.121 [187/268] Linking static target lib/librte_mbuf.a 00:03:47.121 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:47.121 [189/268] Linking static target lib/librte_reorder.a 00:03:47.121 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:47.121 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.121 [192/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:47.121 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:47.121 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:47.121 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:47.121 [196/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:47.381 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.381 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:47.381 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:47.381 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:47.381 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.381 [202/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.640 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:47.640 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:47.640 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:47.640 
[206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:47.640 [207/268] Linking static target drivers/librte_bus_vdev.a 00:03:47.640 [208/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:47.640 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:47.640 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:47.640 [211/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.640 [212/268] Linking static target drivers/librte_bus_pci.a 00:03:47.640 [213/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:47.640 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.640 [215/268] Linking static target lib/librte_security.a 00:03:47.640 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:47.640 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:47.640 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:47.640 [219/268] Linking static target drivers/librte_mempool_ring.a 00:03:47.640 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:47.640 [221/268] Linking static target lib/librte_ethdev.a 00:03:47.640 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:47.899 [223/268] Linking static target lib/librte_cryptodev.a 00:03:47.899 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.899 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.899 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.837 [227/268] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.216 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:52.143 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.143 [230/268] Linking target lib/librte_eal.so.24.1 00:03:52.143 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:52.143 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.143 [233/268] Linking target lib/librte_meter.so.24.1 00:03:52.143 [234/268] Linking target lib/librte_ring.so.24.1 00:03:52.143 [235/268] Linking target lib/librte_timer.so.24.1 00:03:52.143 [236/268] Linking target lib/librte_pci.so.24.1 00:03:52.143 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:52.143 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:52.143 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:52.143 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:52.143 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:52.143 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:52.143 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:52.143 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:52.143 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:52.143 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:52.402 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:52.402 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:52.402 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:52.402 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:52.402 
[251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:52.661 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:52.661 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:52.661 [254/268] Linking target lib/librte_net.so.24.1 00:03:52.661 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:52.661 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:52.661 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:52.661 [258/268] Linking target lib/librte_hash.so.24.1 00:03:52.661 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:52.661 [260/268] Linking target lib/librte_security.so.24.1 00:03:52.661 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:52.919 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:52.919 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:52.919 [264/268] Linking target lib/librte_power.so.24.1 00:03:56.205 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:56.205 [266/268] Linking static target lib/librte_vhost.a 00:03:57.141 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.141 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:57.141 INFO: autodetecting backend as ninja 00:03:57.141 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:04:19.071 CC lib/log/log.o 00:04:19.071 CC lib/ut_mock/mock.o 00:04:19.071 CC lib/ut/ut.o 00:04:19.071 CC lib/log/log_flags.o 00:04:19.071 CC lib/log/log_deprecated.o 00:04:19.071 LIB libspdk_ut.a 00:04:19.071 LIB libspdk_log.a 00:04:19.071 LIB libspdk_ut_mock.a 00:04:19.071 SO libspdk_ut.so.2.0 00:04:19.071 SO libspdk_ut_mock.so.6.0 00:04:19.071 SO libspdk_log.so.7.1 00:04:19.071 SYMLINK 
libspdk_ut.so 00:04:19.071 SYMLINK libspdk_ut_mock.so 00:04:19.071 SYMLINK libspdk_log.so 00:04:19.071 CXX lib/trace_parser/trace.o 00:04:19.071 CC lib/dma/dma.o 00:04:19.071 CC lib/ioat/ioat.o 00:04:19.071 CC lib/util/base64.o 00:04:19.071 CC lib/util/bit_array.o 00:04:19.071 CC lib/util/cpuset.o 00:04:19.071 CC lib/util/crc16.o 00:04:19.071 CC lib/util/crc32.o 00:04:19.071 CC lib/util/crc32c.o 00:04:19.071 CC lib/util/crc32_ieee.o 00:04:19.071 CC lib/util/crc64.o 00:04:19.071 CC lib/util/dif.o 00:04:19.071 CC lib/util/fd.o 00:04:19.071 CC lib/util/fd_group.o 00:04:19.071 CC lib/util/file.o 00:04:19.071 CC lib/util/hexlify.o 00:04:19.071 CC lib/util/iov.o 00:04:19.071 CC lib/util/math.o 00:04:19.071 CC lib/util/net.o 00:04:19.071 CC lib/util/pipe.o 00:04:19.071 CC lib/util/strerror_tls.o 00:04:19.071 CC lib/util/string.o 00:04:19.071 CC lib/util/uuid.o 00:04:19.071 CC lib/util/xor.o 00:04:19.071 CC lib/util/zipf.o 00:04:19.071 CC lib/util/md5.o 00:04:19.071 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.071 CC lib/vfio_user/host/vfio_user.o 00:04:19.071 LIB libspdk_dma.a 00:04:19.071 SO libspdk_dma.so.5.0 00:04:19.071 LIB libspdk_ioat.a 00:04:19.071 SO libspdk_ioat.so.7.0 00:04:19.071 SYMLINK libspdk_dma.so 00:04:19.071 SYMLINK libspdk_ioat.so 00:04:19.071 LIB libspdk_vfio_user.a 00:04:19.071 SO libspdk_vfio_user.so.5.0 00:04:19.071 SYMLINK libspdk_vfio_user.so 00:04:19.071 LIB libspdk_util.a 00:04:19.071 SO libspdk_util.so.10.1 00:04:19.071 SYMLINK libspdk_util.so 00:04:19.071 CC lib/conf/conf.o 00:04:19.071 CC lib/idxd/idxd.o 00:04:19.071 CC lib/json/json_parse.o 00:04:19.071 CC lib/idxd/idxd_user.o 00:04:19.071 CC lib/idxd/idxd_kernel.o 00:04:19.071 CC lib/vmd/vmd.o 00:04:19.071 CC lib/json/json_util.o 00:04:19.071 CC lib/rdma_utils/rdma_utils.o 00:04:19.071 CC lib/json/json_write.o 00:04:19.071 CC lib/vmd/led.o 00:04:19.071 CC lib/env_dpdk/env.o 00:04:19.071 CC lib/env_dpdk/memory.o 00:04:19.071 CC lib/env_dpdk/pci.o 00:04:19.071 CC lib/env_dpdk/init.o 
00:04:19.071 CC lib/env_dpdk/threads.o 00:04:19.071 CC lib/env_dpdk/pci_ioat.o 00:04:19.071 CC lib/env_dpdk/pci_virtio.o 00:04:19.071 CC lib/env_dpdk/pci_vmd.o 00:04:19.071 CC lib/env_dpdk/pci_idxd.o 00:04:19.071 CC lib/env_dpdk/pci_event.o 00:04:19.071 CC lib/env_dpdk/sigbus_handler.o 00:04:19.071 CC lib/env_dpdk/pci_dpdk.o 00:04:19.071 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.071 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.071 LIB libspdk_conf.a 00:04:19.071 LIB libspdk_rdma_utils.a 00:04:19.071 SO libspdk_conf.so.6.0 00:04:19.071 LIB libspdk_json.a 00:04:19.071 SO libspdk_rdma_utils.so.1.0 00:04:19.071 SO libspdk_json.so.6.0 00:04:19.071 SYMLINK libspdk_conf.so 00:04:19.071 SYMLINK libspdk_rdma_utils.so 00:04:19.071 SYMLINK libspdk_json.so 00:04:19.071 CC lib/rdma_provider/common.o 00:04:19.071 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.071 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.071 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.071 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.071 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.071 LIB libspdk_idxd.a 00:04:19.071 SO libspdk_idxd.so.12.1 00:04:19.071 LIB libspdk_vmd.a 00:04:19.071 SO libspdk_vmd.so.6.0 00:04:19.071 SYMLINK libspdk_idxd.so 00:04:19.071 SYMLINK libspdk_vmd.so 00:04:19.071 LIB libspdk_rdma_provider.a 00:04:19.071 SO libspdk_rdma_provider.so.7.0 00:04:19.071 SYMLINK libspdk_rdma_provider.so 00:04:19.071 LIB libspdk_jsonrpc.a 00:04:19.071 SO libspdk_jsonrpc.so.6.0 00:04:19.071 LIB libspdk_trace_parser.a 00:04:19.071 SO libspdk_trace_parser.so.6.0 00:04:19.071 SYMLINK libspdk_jsonrpc.so 00:04:19.071 SYMLINK libspdk_trace_parser.so 00:04:19.071 CC lib/rpc/rpc.o 00:04:19.330 LIB libspdk_rpc.a 00:04:19.330 SO libspdk_rpc.so.6.0 00:04:19.330 SYMLINK libspdk_rpc.so 00:04:19.588 CC lib/keyring/keyring.o 00:04:19.588 CC lib/keyring/keyring_rpc.o 00:04:19.588 CC lib/trace/trace.o 00:04:19.588 CC lib/notify/notify.o 00:04:19.588 CC lib/trace/trace_flags.o 00:04:19.588 CC lib/notify/notify_rpc.o 00:04:19.588 CC 
lib/trace/trace_rpc.o 00:04:19.588 LIB libspdk_notify.a 00:04:19.588 SO libspdk_notify.so.6.0 00:04:19.847 LIB libspdk_keyring.a 00:04:19.847 SYMLINK libspdk_notify.so 00:04:19.847 LIB libspdk_trace.a 00:04:19.847 SO libspdk_keyring.so.2.0 00:04:19.847 SO libspdk_trace.so.11.0 00:04:19.847 SYMLINK libspdk_keyring.so 00:04:19.847 SYMLINK libspdk_trace.so 00:04:19.847 LIB libspdk_env_dpdk.a 00:04:20.106 CC lib/thread/thread.o 00:04:20.106 CC lib/thread/iobuf.o 00:04:20.106 CC lib/sock/sock.o 00:04:20.106 CC lib/sock/sock_rpc.o 00:04:20.106 SO libspdk_env_dpdk.so.15.1 00:04:20.106 SYMLINK libspdk_env_dpdk.so 00:04:20.365 LIB libspdk_sock.a 00:04:20.365 SO libspdk_sock.so.10.0 00:04:20.365 SYMLINK libspdk_sock.so 00:04:20.623 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:20.623 CC lib/nvme/nvme_ctrlr.o 00:04:20.623 CC lib/nvme/nvme_fabric.o 00:04:20.623 CC lib/nvme/nvme_ns_cmd.o 00:04:20.623 CC lib/nvme/nvme_ns.o 00:04:20.623 CC lib/nvme/nvme_pcie_common.o 00:04:20.623 CC lib/nvme/nvme_pcie.o 00:04:20.623 CC lib/nvme/nvme_qpair.o 00:04:20.623 CC lib/nvme/nvme.o 00:04:20.623 CC lib/nvme/nvme_quirks.o 00:04:20.623 CC lib/nvme/nvme_transport.o 00:04:20.623 CC lib/nvme/nvme_discovery.o 00:04:20.623 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:20.623 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:20.623 CC lib/nvme/nvme_tcp.o 00:04:20.623 CC lib/nvme/nvme_opal.o 00:04:20.623 CC lib/nvme/nvme_io_msg.o 00:04:20.623 CC lib/nvme/nvme_poll_group.o 00:04:20.623 CC lib/nvme/nvme_zns.o 00:04:20.623 CC lib/nvme/nvme_stubs.o 00:04:20.623 CC lib/nvme/nvme_auth.o 00:04:20.623 CC lib/nvme/nvme_cuse.o 00:04:20.623 CC lib/nvme/nvme_vfio_user.o 00:04:20.623 CC lib/nvme/nvme_rdma.o 00:04:21.560 LIB libspdk_thread.a 00:04:21.560 SO libspdk_thread.so.11.0 00:04:21.818 SYMLINK libspdk_thread.so 00:04:21.818 CC lib/accel/accel.o 00:04:21.818 CC lib/accel/accel_rpc.o 00:04:21.818 CC lib/accel/accel_sw.o 00:04:21.818 CC lib/fsdev/fsdev.o 00:04:21.818 CC lib/fsdev/fsdev_io.o 00:04:21.818 CC lib/fsdev/fsdev_rpc.o 
00:04:21.818 CC lib/vfu_tgt/tgt_endpoint.o 00:04:21.818 CC lib/blob/blobstore.o 00:04:21.818 CC lib/init/json_config.o 00:04:21.818 CC lib/virtio/virtio.o 00:04:21.818 CC lib/vfu_tgt/tgt_rpc.o 00:04:21.818 CC lib/blob/request.o 00:04:21.818 CC lib/init/subsystem.o 00:04:21.818 CC lib/virtio/virtio_vhost_user.o 00:04:21.818 CC lib/blob/zeroes.o 00:04:21.818 CC lib/init/subsystem_rpc.o 00:04:21.818 CC lib/virtio/virtio_vfio_user.o 00:04:21.818 CC lib/blob/blob_bs_dev.o 00:04:21.818 CC lib/init/rpc.o 00:04:21.818 CC lib/virtio/virtio_pci.o 00:04:22.077 LIB libspdk_init.a 00:04:22.077 SO libspdk_init.so.6.0 00:04:22.336 SYMLINK libspdk_init.so 00:04:22.336 LIB libspdk_virtio.a 00:04:22.336 LIB libspdk_vfu_tgt.a 00:04:22.336 SO libspdk_vfu_tgt.so.3.0 00:04:22.336 SO libspdk_virtio.so.7.0 00:04:22.336 SYMLINK libspdk_vfu_tgt.so 00:04:22.336 SYMLINK libspdk_virtio.so 00:04:22.336 CC lib/event/app.o 00:04:22.336 CC lib/event/reactor.o 00:04:22.336 CC lib/event/log_rpc.o 00:04:22.336 CC lib/event/app_rpc.o 00:04:22.336 CC lib/event/scheduler_static.o 00:04:22.612 LIB libspdk_fsdev.a 00:04:22.612 SO libspdk_fsdev.so.2.0 00:04:22.612 SYMLINK libspdk_fsdev.so 00:04:22.871 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:22.871 LIB libspdk_event.a 00:04:22.871 SO libspdk_event.so.14.0 00:04:22.871 SYMLINK libspdk_event.so 00:04:23.129 LIB libspdk_accel.a 00:04:23.129 SO libspdk_accel.so.16.0 00:04:23.129 SYMLINK libspdk_accel.so 00:04:23.387 CC lib/bdev/bdev.o 00:04:23.387 CC lib/bdev/bdev_rpc.o 00:04:23.387 CC lib/bdev/bdev_zone.o 00:04:23.387 CC lib/bdev/part.o 00:04:23.387 CC lib/bdev/scsi_nvme.o 00:04:23.387 LIB libspdk_nvme.a 00:04:23.646 LIB libspdk_fuse_dispatcher.a 00:04:23.646 SO libspdk_fuse_dispatcher.so.1.0 00:04:23.646 SO libspdk_nvme.so.15.0 00:04:23.646 SYMLINK libspdk_fuse_dispatcher.so 00:04:23.906 SYMLINK libspdk_nvme.so 00:04:25.284 LIB libspdk_blob.a 00:04:25.284 SO libspdk_blob.so.12.0 00:04:25.284 SYMLINK libspdk_blob.so 00:04:25.284 CC lib/lvol/lvol.o 
00:04:25.284 CC lib/blobfs/blobfs.o 00:04:25.284 CC lib/blobfs/tree.o 00:04:26.220 LIB libspdk_bdev.a 00:04:26.220 SO libspdk_bdev.so.17.0 00:04:26.220 LIB libspdk_blobfs.a 00:04:26.220 SYMLINK libspdk_bdev.so 00:04:26.220 SO libspdk_blobfs.so.11.0 00:04:26.220 LIB libspdk_lvol.a 00:04:26.220 SYMLINK libspdk_blobfs.so 00:04:26.220 SO libspdk_lvol.so.11.0 00:04:26.487 SYMLINK libspdk_lvol.so 00:04:26.487 CC lib/nbd/nbd.o 00:04:26.487 CC lib/nbd/nbd_rpc.o 00:04:26.487 CC lib/scsi/dev.o 00:04:26.487 CC lib/ublk/ublk.o 00:04:26.487 CC lib/scsi/lun.o 00:04:26.487 CC lib/ublk/ublk_rpc.o 00:04:26.487 CC lib/scsi/port.o 00:04:26.487 CC lib/scsi/scsi.o 00:04:26.487 CC lib/scsi/scsi_bdev.o 00:04:26.487 CC lib/scsi/scsi_pr.o 00:04:26.487 CC lib/nvmf/ctrlr.o 00:04:26.487 CC lib/scsi/scsi_rpc.o 00:04:26.487 CC lib/scsi/task.o 00:04:26.487 CC lib/nvmf/ctrlr_discovery.o 00:04:26.487 CC lib/ftl/ftl_core.o 00:04:26.487 CC lib/nvmf/ctrlr_bdev.o 00:04:26.487 CC lib/ftl/ftl_init.o 00:04:26.487 CC lib/nvmf/subsystem.o 00:04:26.487 CC lib/ftl/ftl_layout.o 00:04:26.487 CC lib/nvmf/nvmf.o 00:04:26.487 CC lib/ftl/ftl_debug.o 00:04:26.487 CC lib/nvmf/nvmf_rpc.o 00:04:26.487 CC lib/ftl/ftl_io.o 00:04:26.487 CC lib/nvmf/transport.o 00:04:26.487 CC lib/ftl/ftl_sb.o 00:04:26.487 CC lib/nvmf/tcp.o 00:04:26.487 CC lib/nvmf/stubs.o 00:04:26.487 CC lib/ftl/ftl_l2p_flat.o 00:04:26.487 CC lib/ftl/ftl_l2p.o 00:04:26.487 CC lib/nvmf/mdns_server.o 00:04:26.487 CC lib/nvmf/vfio_user.o 00:04:26.487 CC lib/ftl/ftl_nv_cache.o 00:04:26.487 CC lib/nvmf/rdma.o 00:04:26.487 CC lib/ftl/ftl_band.o 00:04:26.487 CC lib/ftl/ftl_band_ops.o 00:04:26.487 CC lib/nvmf/auth.o 00:04:26.487 CC lib/ftl/ftl_writer.o 00:04:26.487 CC lib/ftl/ftl_rq.o 00:04:26.487 CC lib/ftl/ftl_reloc.o 00:04:26.487 CC lib/ftl/ftl_l2p_cache.o 00:04:26.487 CC lib/ftl/ftl_p2l.o 00:04:26.487 CC lib/ftl/ftl_p2l_log.o 00:04:26.487 CC lib/ftl/mngt/ftl_mngt.o 00:04:26.487 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:26.487 CC lib/ftl/mngt/ftl_mngt_shutdown.o 
00:04:26.487 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:26.487 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.487 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.749 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.749 CC lib/ftl/utils/ftl_conf.o 00:04:26.749 CC lib/ftl/utils/ftl_md.o 00:04:26.749 CC lib/ftl/utils/ftl_mempool.o 00:04:27.013 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.013 CC lib/ftl/utils/ftl_property.o 00:04:27.013 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:27.013 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:27.013 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:27.013 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:27.013 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:27.013 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:27.013 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:27.013 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:27.013 CC lib/ftl/base/ftl_base_dev.o 00:04:27.275 CC lib/ftl/base/ftl_base_bdev.o 00:04:27.275 CC lib/ftl/ftl_trace.o 00:04:27.275 LIB libspdk_nbd.a 00:04:27.275 SO libspdk_nbd.so.7.0 00:04:27.275 LIB libspdk_scsi.a 00:04:27.275 SYMLINK libspdk_nbd.so 00:04:27.275 SO libspdk_scsi.so.9.0 00:04:27.535 SYMLINK libspdk_scsi.so 00:04:27.535 LIB libspdk_ublk.a 00:04:27.535 SO libspdk_ublk.so.3.0 00:04:27.535 SYMLINK libspdk_ublk.so 00:04:27.535 CC lib/iscsi/conn.o 00:04:27.535 CC lib/vhost/vhost.o 00:04:27.535 CC lib/iscsi/init_grp.o 00:04:27.535 CC lib/vhost/vhost_rpc.o 00:04:27.535 CC lib/iscsi/iscsi.o 00:04:27.535 CC lib/vhost/vhost_scsi.o 00:04:27.535 CC lib/iscsi/param.o 
00:04:27.535 CC lib/vhost/vhost_blk.o 00:04:27.535 CC lib/iscsi/portal_grp.o 00:04:27.535 CC lib/vhost/rte_vhost_user.o 00:04:27.535 CC lib/iscsi/tgt_node.o 00:04:27.535 CC lib/iscsi/iscsi_subsystem.o 00:04:27.535 CC lib/iscsi/iscsi_rpc.o 00:04:27.535 CC lib/iscsi/task.o 00:04:27.794 LIB libspdk_ftl.a 00:04:28.053 SO libspdk_ftl.so.9.0 00:04:28.313 SYMLINK libspdk_ftl.so 00:04:28.880 LIB libspdk_vhost.a 00:04:28.880 SO libspdk_vhost.so.8.0 00:04:29.145 SYMLINK libspdk_vhost.so 00:04:29.145 LIB libspdk_iscsi.a 00:04:29.145 LIB libspdk_nvmf.a 00:04:29.145 SO libspdk_iscsi.so.8.0 00:04:29.145 SO libspdk_nvmf.so.20.0 00:04:29.404 SYMLINK libspdk_iscsi.so 00:04:29.404 SYMLINK libspdk_nvmf.so 00:04:29.663 CC module/vfu_device/vfu_virtio.o 00:04:29.663 CC module/vfu_device/vfu_virtio_blk.o 00:04:29.663 CC module/vfu_device/vfu_virtio_scsi.o 00:04:29.663 CC module/vfu_device/vfu_virtio_rpc.o 00:04:29.663 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.663 CC module/vfu_device/vfu_virtio_fs.o 00:04:29.663 CC module/accel/ioat/accel_ioat.o 00:04:29.663 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.663 CC module/blob/bdev/blob_bdev.o 00:04:29.663 CC module/sock/posix/posix.o 00:04:29.663 CC module/accel/error/accel_error.o 00:04:29.663 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.663 CC module/accel/error/accel_error_rpc.o 00:04:29.663 CC module/fsdev/aio/fsdev_aio.o 00:04:29.663 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:29.663 CC module/keyring/linux/keyring.o 00:04:29.663 CC module/fsdev/aio/linux_aio_mgr.o 00:04:29.663 CC module/accel/iaa/accel_iaa.o 00:04:29.663 CC module/accel/dsa/accel_dsa.o 00:04:29.663 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.663 CC module/keyring/linux/keyring_rpc.o 00:04:29.663 CC module/keyring/file/keyring.o 00:04:29.663 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.663 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.663 CC module/keyring/file/keyring_rpc.o 00:04:29.663 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.921 
LIB libspdk_env_dpdk_rpc.a 00:04:29.921 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.921 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.921 LIB libspdk_keyring_file.a 00:04:29.921 LIB libspdk_keyring_linux.a 00:04:29.921 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.921 LIB libspdk_scheduler_gscheduler.a 00:04:29.921 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:29.921 SO libspdk_keyring_file.so.2.0 00:04:29.921 SO libspdk_keyring_linux.so.1.0 00:04:29.921 SO libspdk_scheduler_gscheduler.so.4.0 00:04:29.921 LIB libspdk_scheduler_dynamic.a 00:04:29.921 LIB libspdk_accel_iaa.a 00:04:29.921 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.921 SYMLINK libspdk_keyring_file.so 00:04:29.921 SYMLINK libspdk_keyring_linux.so 00:04:29.921 SYMLINK libspdk_scheduler_gscheduler.so 00:04:29.921 SO libspdk_scheduler_dynamic.so.4.0 00:04:29.921 SO libspdk_accel_iaa.so.3.0 00:04:29.921 LIB libspdk_accel_ioat.a 00:04:30.179 SO libspdk_accel_ioat.so.6.0 00:04:30.179 LIB libspdk_accel_error.a 00:04:30.179 SYMLINK libspdk_scheduler_dynamic.so 00:04:30.179 LIB libspdk_blob_bdev.a 00:04:30.179 SYMLINK libspdk_accel_iaa.so 00:04:30.179 SO libspdk_accel_error.so.2.0 00:04:30.179 SO libspdk_blob_bdev.so.12.0 00:04:30.179 SYMLINK libspdk_accel_ioat.so 00:04:30.179 SYMLINK libspdk_blob_bdev.so 00:04:30.179 SYMLINK libspdk_accel_error.so 00:04:30.179 LIB libspdk_accel_dsa.a 00:04:30.179 SO libspdk_accel_dsa.so.5.0 00:04:30.179 SYMLINK libspdk_accel_dsa.so 00:04:30.439 LIB libspdk_vfu_device.a 00:04:30.439 SO libspdk_vfu_device.so.3.0 00:04:30.439 CC module/bdev/malloc/bdev_malloc.o 00:04:30.439 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.439 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.439 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.439 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.439 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.439 CC module/bdev/null/bdev_null.o 00:04:30.439 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.439 CC module/bdev/ftl/bdev_ftl.o 00:04:30.439 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.439 CC module/bdev/null/bdev_null_rpc.o 00:04:30.439 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.439 CC module/bdev/gpt/gpt.o 00:04:30.439 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.439 CC module/bdev/aio/bdev_aio.o 00:04:30.439 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.439 CC module/bdev/error/vbdev_error.o 00:04:30.439 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.439 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.439 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.439 CC module/bdev/delay/vbdev_delay.o 00:04:30.439 CC module/bdev/nvme/bdev_nvme.o 00:04:30.439 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.439 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.439 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.439 CC module/bdev/split/vbdev_split.o 00:04:30.439 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.439 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:30.439 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.439 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.439 CC module/bdev/split/vbdev_split_rpc.o 00:04:30.439 CC module/bdev/raid/bdev_raid.o 00:04:30.439 CC module/bdev/nvme/nvme_rpc.o 00:04:30.439 CC module/bdev/raid/bdev_raid_rpc.o 00:04:30.439 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.439 CC module/bdev/nvme/vbdev_opal.o 00:04:30.439 CC module/bdev/raid/bdev_raid_sb.o 00:04:30.439 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:30.439 CC module/bdev/raid/raid0.o 00:04:30.439 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:30.439 CC module/bdev/raid/raid1.o 00:04:30.439 CC module/bdev/raid/concat.o 00:04:30.439 SYMLINK libspdk_vfu_device.so 00:04:30.698 LIB libspdk_fsdev_aio.a 00:04:30.698 SO libspdk_fsdev_aio.so.1.0 00:04:30.698 LIB libspdk_blobfs_bdev.a 00:04:30.698 LIB libspdk_sock_posix.a 00:04:30.698 LIB libspdk_bdev_passthru.a 00:04:30.698 SO libspdk_sock_posix.so.6.0 00:04:30.698 SO libspdk_blobfs_bdev.so.6.0 00:04:30.957 SO libspdk_bdev_passthru.so.6.0 00:04:30.957 SYMLINK 
libspdk_fsdev_aio.so 00:04:30.957 LIB libspdk_bdev_null.a 00:04:30.957 SYMLINK libspdk_blobfs_bdev.so 00:04:30.957 LIB libspdk_bdev_error.a 00:04:30.957 LIB libspdk_bdev_split.a 00:04:30.957 SYMLINK libspdk_bdev_passthru.so 00:04:30.957 SO libspdk_bdev_null.so.6.0 00:04:30.957 SYMLINK libspdk_sock_posix.so 00:04:30.957 SO libspdk_bdev_error.so.6.0 00:04:30.957 SO libspdk_bdev_split.so.6.0 00:04:30.957 LIB libspdk_bdev_gpt.a 00:04:30.957 LIB libspdk_bdev_ftl.a 00:04:30.957 SYMLINK libspdk_bdev_null.so 00:04:30.957 LIB libspdk_bdev_aio.a 00:04:30.957 SO libspdk_bdev_gpt.so.6.0 00:04:30.957 SYMLINK libspdk_bdev_error.so 00:04:30.957 SYMLINK libspdk_bdev_split.so 00:04:30.957 SO libspdk_bdev_ftl.so.6.0 00:04:30.957 SO libspdk_bdev_aio.so.6.0 00:04:30.957 LIB libspdk_bdev_iscsi.a 00:04:30.957 LIB libspdk_bdev_zone_block.a 00:04:30.957 LIB libspdk_bdev_malloc.a 00:04:30.957 SO libspdk_bdev_iscsi.so.6.0 00:04:30.957 SYMLINK libspdk_bdev_gpt.so 00:04:30.957 SYMLINK libspdk_bdev_ftl.so 00:04:30.957 SO libspdk_bdev_zone_block.so.6.0 00:04:30.957 SO libspdk_bdev_malloc.so.6.0 00:04:30.957 SYMLINK libspdk_bdev_aio.so 00:04:30.957 SYMLINK libspdk_bdev_iscsi.so 00:04:31.219 LIB libspdk_bdev_delay.a 00:04:31.219 SYMLINK libspdk_bdev_zone_block.so 00:04:31.219 SYMLINK libspdk_bdev_malloc.so 00:04:31.219 SO libspdk_bdev_delay.so.6.0 00:04:31.219 SYMLINK libspdk_bdev_delay.so 00:04:31.219 LIB libspdk_bdev_virtio.a 00:04:31.219 LIB libspdk_bdev_lvol.a 00:04:31.219 SO libspdk_bdev_virtio.so.6.0 00:04:31.219 SO libspdk_bdev_lvol.so.6.0 00:04:31.219 SYMLINK libspdk_bdev_virtio.so 00:04:31.219 SYMLINK libspdk_bdev_lvol.so 00:04:31.786 LIB libspdk_bdev_raid.a 00:04:31.786 SO libspdk_bdev_raid.so.6.0 00:04:31.786 SYMLINK libspdk_bdev_raid.so 00:04:33.169 LIB libspdk_bdev_nvme.a 00:04:33.169 SO libspdk_bdev_nvme.so.7.1 00:04:33.169 SYMLINK libspdk_bdev_nvme.so 00:04:33.739 CC module/event/subsystems/keyring/keyring.o 00:04:33.739 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.739 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:33.739 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.739 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.739 CC module/event/subsystems/sock/sock.o 00:04:33.739 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.739 CC module/event/subsystems/vmd/vmd.o 00:04:33.739 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:33.739 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.739 LIB libspdk_event_keyring.a 00:04:33.739 LIB libspdk_event_vhost_blk.a 00:04:33.739 LIB libspdk_event_fsdev.a 00:04:33.739 LIB libspdk_event_scheduler.a 00:04:33.739 LIB libspdk_event_sock.a 00:04:33.739 LIB libspdk_event_vmd.a 00:04:33.739 LIB libspdk_event_vfu_tgt.a 00:04:33.739 SO libspdk_event_keyring.so.1.0 00:04:33.739 SO libspdk_event_vhost_blk.so.3.0 00:04:33.739 SO libspdk_event_scheduler.so.4.0 00:04:33.739 SO libspdk_event_fsdev.so.1.0 00:04:33.739 LIB libspdk_event_iobuf.a 00:04:33.739 SO libspdk_event_sock.so.5.0 00:04:33.739 SO libspdk_event_vfu_tgt.so.3.0 00:04:33.739 SO libspdk_event_vmd.so.6.0 00:04:33.739 SO libspdk_event_iobuf.so.3.0 00:04:33.739 SYMLINK libspdk_event_keyring.so 00:04:33.739 SYMLINK libspdk_event_vhost_blk.so 00:04:33.739 SYMLINK libspdk_event_fsdev.so 00:04:33.739 SYMLINK libspdk_event_scheduler.so 00:04:33.739 SYMLINK libspdk_event_sock.so 00:04:33.739 SYMLINK libspdk_event_vfu_tgt.so 00:04:33.739 SYMLINK libspdk_event_vmd.so 00:04:33.739 SYMLINK libspdk_event_iobuf.so 00:04:33.998 CC module/event/subsystems/accel/accel.o 00:04:34.257 LIB libspdk_event_accel.a 00:04:34.257 SO libspdk_event_accel.so.6.0 00:04:34.257 SYMLINK libspdk_event_accel.so 00:04:34.517 CC module/event/subsystems/bdev/bdev.o 00:04:34.517 LIB libspdk_event_bdev.a 00:04:34.776 SO libspdk_event_bdev.so.6.0 00:04:34.777 SYMLINK libspdk_event_bdev.so 00:04:34.777 CC module/event/subsystems/ublk/ublk.o 00:04:34.777 CC module/event/subsystems/nbd/nbd.o 00:04:34.777 CC module/event/subsystems/scsi/scsi.o 00:04:34.777 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:04:34.777 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.035 LIB libspdk_event_ublk.a 00:04:35.035 LIB libspdk_event_nbd.a 00:04:35.035 LIB libspdk_event_scsi.a 00:04:35.035 SO libspdk_event_ublk.so.3.0 00:04:35.035 SO libspdk_event_nbd.so.6.0 00:04:35.035 SO libspdk_event_scsi.so.6.0 00:04:35.035 SYMLINK libspdk_event_ublk.so 00:04:35.035 SYMLINK libspdk_event_nbd.so 00:04:35.035 SYMLINK libspdk_event_scsi.so 00:04:35.035 LIB libspdk_event_nvmf.a 00:04:35.035 SO libspdk_event_nvmf.so.6.0 00:04:35.294 SYMLINK libspdk_event_nvmf.so 00:04:35.294 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.294 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.294 LIB libspdk_event_vhost_scsi.a 00:04:35.552 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.552 LIB libspdk_event_iscsi.a 00:04:35.552 SO libspdk_event_iscsi.so.6.0 00:04:35.552 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.552 SYMLINK libspdk_event_iscsi.so 00:04:35.552 SO libspdk.so.6.0 00:04:35.552 SYMLINK libspdk.so 00:04:35.821 CXX app/trace/trace.o 00:04:35.821 CC app/trace_record/trace_record.o 00:04:35.821 CC app/spdk_lspci/spdk_lspci.o 00:04:35.821 CC app/spdk_nvme_perf/perf.o 00:04:35.821 TEST_HEADER include/spdk/accel.h 00:04:35.821 TEST_HEADER include/spdk/assert.h 00:04:35.821 TEST_HEADER include/spdk/accel_module.h 00:04:35.821 TEST_HEADER include/spdk/base64.h 00:04:35.821 CC app/spdk_top/spdk_top.o 00:04:35.821 TEST_HEADER include/spdk/barrier.h 00:04:35.821 CC app/spdk_nvme_discover/discovery_aer.o 00:04:35.821 CC test/rpc_client/rpc_client_test.o 00:04:35.821 TEST_HEADER include/spdk/bdev.h 00:04:35.821 CC app/spdk_nvme_identify/identify.o 00:04:35.821 TEST_HEADER include/spdk/bdev_module.h 00:04:35.821 TEST_HEADER include/spdk/bdev_zone.h 00:04:35.821 TEST_HEADER include/spdk/bit_array.h 00:04:35.821 TEST_HEADER include/spdk/bit_pool.h 00:04:35.821 TEST_HEADER include/spdk/blob_bdev.h 00:04:35.821 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:35.821 
TEST_HEADER include/spdk/blobfs.h 00:04:35.821 TEST_HEADER include/spdk/blob.h 00:04:35.821 TEST_HEADER include/spdk/conf.h 00:04:35.821 TEST_HEADER include/spdk/config.h 00:04:35.821 TEST_HEADER include/spdk/cpuset.h 00:04:35.821 TEST_HEADER include/spdk/crc16.h 00:04:35.821 TEST_HEADER include/spdk/crc32.h 00:04:35.821 TEST_HEADER include/spdk/crc64.h 00:04:35.821 TEST_HEADER include/spdk/dif.h 00:04:35.821 TEST_HEADER include/spdk/dma.h 00:04:35.821 TEST_HEADER include/spdk/endian.h 00:04:35.821 TEST_HEADER include/spdk/env_dpdk.h 00:04:35.821 TEST_HEADER include/spdk/env.h 00:04:35.821 TEST_HEADER include/spdk/event.h 00:04:35.821 TEST_HEADER include/spdk/fd_group.h 00:04:35.821 TEST_HEADER include/spdk/fd.h 00:04:35.821 TEST_HEADER include/spdk/file.h 00:04:35.821 TEST_HEADER include/spdk/fsdev.h 00:04:35.821 TEST_HEADER include/spdk/fsdev_module.h 00:04:35.821 TEST_HEADER include/spdk/ftl.h 00:04:35.821 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:35.821 TEST_HEADER include/spdk/gpt_spec.h 00:04:35.821 TEST_HEADER include/spdk/hexlify.h 00:04:35.821 TEST_HEADER include/spdk/histogram_data.h 00:04:35.821 TEST_HEADER include/spdk/idxd.h 00:04:35.821 TEST_HEADER include/spdk/idxd_spec.h 00:04:35.821 TEST_HEADER include/spdk/init.h 00:04:35.821 TEST_HEADER include/spdk/ioat.h 00:04:35.821 TEST_HEADER include/spdk/ioat_spec.h 00:04:35.821 TEST_HEADER include/spdk/iscsi_spec.h 00:04:35.821 TEST_HEADER include/spdk/json.h 00:04:35.821 TEST_HEADER include/spdk/jsonrpc.h 00:04:35.821 TEST_HEADER include/spdk/keyring.h 00:04:35.821 TEST_HEADER include/spdk/keyring_module.h 00:04:35.821 TEST_HEADER include/spdk/likely.h 00:04:35.821 TEST_HEADER include/spdk/log.h 00:04:35.821 TEST_HEADER include/spdk/lvol.h 00:04:35.821 TEST_HEADER include/spdk/md5.h 00:04:35.821 TEST_HEADER include/spdk/memory.h 00:04:35.821 TEST_HEADER include/spdk/mmio.h 00:04:35.821 TEST_HEADER include/spdk/nbd.h 00:04:35.821 TEST_HEADER include/spdk/net.h 00:04:35.821 TEST_HEADER 
include/spdk/nvme.h 00:04:35.821 TEST_HEADER include/spdk/notify.h 00:04:35.821 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:35.821 TEST_HEADER include/spdk/nvme_intel.h 00:04:35.821 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:35.821 TEST_HEADER include/spdk/nvme_spec.h 00:04:35.821 TEST_HEADER include/spdk/nvme_zns.h 00:04:35.821 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:35.821 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:35.821 TEST_HEADER include/spdk/nvmf.h 00:04:35.821 TEST_HEADER include/spdk/nvmf_spec.h 00:04:35.821 TEST_HEADER include/spdk/nvmf_transport.h 00:04:35.821 TEST_HEADER include/spdk/opal.h 00:04:35.821 TEST_HEADER include/spdk/opal_spec.h 00:04:35.821 TEST_HEADER include/spdk/pci_ids.h 00:04:35.821 TEST_HEADER include/spdk/pipe.h 00:04:35.821 TEST_HEADER include/spdk/queue.h 00:04:35.821 TEST_HEADER include/spdk/reduce.h 00:04:35.821 TEST_HEADER include/spdk/rpc.h 00:04:35.821 TEST_HEADER include/spdk/scheduler.h 00:04:35.821 TEST_HEADER include/spdk/scsi.h 00:04:35.821 TEST_HEADER include/spdk/scsi_spec.h 00:04:35.821 TEST_HEADER include/spdk/sock.h 00:04:35.821 TEST_HEADER include/spdk/stdinc.h 00:04:35.821 TEST_HEADER include/spdk/string.h 00:04:35.821 TEST_HEADER include/spdk/thread.h 00:04:35.821 TEST_HEADER include/spdk/trace.h 00:04:35.821 TEST_HEADER include/spdk/tree.h 00:04:35.821 TEST_HEADER include/spdk/trace_parser.h 00:04:35.821 TEST_HEADER include/spdk/ublk.h 00:04:35.821 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.821 TEST_HEADER include/spdk/util.h 00:04:35.821 TEST_HEADER include/spdk/version.h 00:04:35.821 TEST_HEADER include/spdk/uuid.h 00:04:35.821 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:35.821 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:35.821 TEST_HEADER include/spdk/vhost.h 00:04:35.821 TEST_HEADER include/spdk/vmd.h 00:04:35.821 TEST_HEADER include/spdk/xor.h 00:04:35.821 TEST_HEADER include/spdk/zipf.h 00:04:35.821 CXX test/cpp_headers/accel.o 00:04:35.821 CXX test/cpp_headers/accel_module.o 
00:04:35.821 CXX test/cpp_headers/assert.o 00:04:35.821 CXX test/cpp_headers/barrier.o 00:04:35.821 CXX test/cpp_headers/base64.o 00:04:35.821 CXX test/cpp_headers/bdev.o 00:04:35.821 CXX test/cpp_headers/bdev_module.o 00:04:35.821 CXX test/cpp_headers/bdev_zone.o 00:04:35.821 CC app/spdk_dd/spdk_dd.o 00:04:35.821 CXX test/cpp_headers/bit_array.o 00:04:35.821 CXX test/cpp_headers/bit_pool.o 00:04:35.821 CXX test/cpp_headers/blob_bdev.o 00:04:35.821 CXX test/cpp_headers/blobfs_bdev.o 00:04:35.821 CXX test/cpp_headers/blobfs.o 00:04:35.821 CXX test/cpp_headers/blob.o 00:04:35.821 CXX test/cpp_headers/conf.o 00:04:35.821 CXX test/cpp_headers/config.o 00:04:35.821 CXX test/cpp_headers/cpuset.o 00:04:35.821 CXX test/cpp_headers/crc16.o 00:04:35.821 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.821 CC app/nvmf_tgt/nvmf_main.o 00:04:35.821 CXX test/cpp_headers/crc32.o 00:04:35.821 CC examples/ioat/perf/perf.o 00:04:35.821 CC test/thread/poller_perf/poller_perf.o 00:04:35.821 CC test/app/histogram_perf/histogram_perf.o 00:04:35.821 CC test/env/vtophys/vtophys.o 00:04:35.822 CC examples/ioat/verify/verify.o 00:04:35.822 CC examples/util/zipf/zipf.o 00:04:35.822 CC test/app/stub/stub.o 00:04:35.822 CC app/spdk_tgt/spdk_tgt.o 00:04:35.822 CC test/app/jsoncat/jsoncat.o 00:04:36.082 CC app/fio/nvme/fio_plugin.o 00:04:36.082 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:36.082 CC test/env/memory/memory_ut.o 00:04:36.082 CC test/env/pci/pci_ut.o 00:04:36.082 CC app/fio/bdev/fio_plugin.o 00:04:36.082 CC test/dma/test_dma/test_dma.o 00:04:36.082 CC test/app/bdev_svc/bdev_svc.o 00:04:36.082 LINK spdk_lspci 00:04:36.082 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.082 CC test/env/mem_callbacks/mem_callbacks.o 00:04:36.350 LINK rpc_client_test 00:04:36.350 LINK spdk_nvme_discover 00:04:36.350 LINK poller_perf 00:04:36.350 LINK jsoncat 00:04:36.350 LINK histogram_perf 00:04:36.350 LINK vtophys 00:04:36.350 LINK zipf 00:04:36.350 LINK interrupt_tgt 00:04:36.350 CXX 
test/cpp_headers/crc64.o 00:04:36.350 CXX test/cpp_headers/dif.o 00:04:36.350 CXX test/cpp_headers/dma.o 00:04:36.350 CXX test/cpp_headers/endian.o 00:04:36.350 LINK nvmf_tgt 00:04:36.350 LINK stub 00:04:36.350 CXX test/cpp_headers/env_dpdk.o 00:04:36.350 CXX test/cpp_headers/env.o 00:04:36.350 LINK env_dpdk_post_init 00:04:36.350 LINK spdk_trace_record 00:04:36.350 CXX test/cpp_headers/event.o 00:04:36.350 CXX test/cpp_headers/fd_group.o 00:04:36.350 CXX test/cpp_headers/fd.o 00:04:36.350 CXX test/cpp_headers/file.o 00:04:36.350 CXX test/cpp_headers/fsdev.o 00:04:36.350 LINK iscsi_tgt 00:04:36.350 CXX test/cpp_headers/fsdev_module.o 00:04:36.350 CXX test/cpp_headers/ftl.o 00:04:36.350 CXX test/cpp_headers/fuse_dispatcher.o 00:04:36.350 CXX test/cpp_headers/gpt_spec.o 00:04:36.350 LINK verify 00:04:36.350 CXX test/cpp_headers/hexlify.o 00:04:36.350 LINK ioat_perf 00:04:36.350 LINK bdev_svc 00:04:36.350 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:36.350 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:36.614 LINK spdk_tgt 00:04:36.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:36.614 CXX test/cpp_headers/histogram_data.o 00:04:36.614 CXX test/cpp_headers/idxd.o 00:04:36.614 CXX test/cpp_headers/idxd_spec.o 00:04:36.614 CXX test/cpp_headers/init.o 00:04:36.614 CXX test/cpp_headers/ioat.o 00:04:36.614 CXX test/cpp_headers/ioat_spec.o 00:04:36.614 LINK spdk_dd 00:04:36.614 CXX test/cpp_headers/iscsi_spec.o 00:04:36.614 CXX test/cpp_headers/json.o 00:04:36.614 CXX test/cpp_headers/jsonrpc.o 00:04:36.887 CXX test/cpp_headers/keyring.o 00:04:36.887 CXX test/cpp_headers/keyring_module.o 00:04:36.887 CXX test/cpp_headers/likely.o 00:04:36.887 CXX test/cpp_headers/log.o 00:04:36.887 CXX test/cpp_headers/lvol.o 00:04:36.887 CXX test/cpp_headers/md5.o 00:04:36.887 CXX test/cpp_headers/memory.o 00:04:36.887 CXX test/cpp_headers/mmio.o 00:04:36.887 CXX test/cpp_headers/nbd.o 00:04:36.887 CXX test/cpp_headers/net.o 00:04:36.887 LINK pci_ut 00:04:36.887 LINK spdk_trace 
00:04:36.887 CXX test/cpp_headers/notify.o 00:04:36.887 CXX test/cpp_headers/nvme.o 00:04:36.887 CXX test/cpp_headers/nvme_intel.o 00:04:36.887 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.887 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.887 CXX test/cpp_headers/nvme_spec.o 00:04:36.887 CC test/event/event_perf/event_perf.o 00:04:36.887 CC test/event/reactor/reactor.o 00:04:36.887 CXX test/cpp_headers/nvme_zns.o 00:04:36.887 CC test/event/reactor_perf/reactor_perf.o 00:04:36.887 LINK nvme_fuzz 00:04:36.887 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.887 CC test/event/app_repeat/app_repeat.o 00:04:36.887 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:37.155 CC examples/sock/hello_world/hello_sock.o 00:04:37.155 CXX test/cpp_headers/nvmf.o 00:04:37.155 CXX test/cpp_headers/nvmf_spec.o 00:04:37.155 CC examples/thread/thread/thread_ex.o 00:04:37.155 CXX test/cpp_headers/nvmf_transport.o 00:04:37.155 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.155 CC test/event/scheduler/scheduler.o 00:04:37.155 CC examples/idxd/perf/perf.o 00:04:37.155 LINK test_dma 00:04:37.155 CXX test/cpp_headers/opal.o 00:04:37.155 CC examples/vmd/led/led.o 00:04:37.155 CXX test/cpp_headers/opal_spec.o 00:04:37.155 CXX test/cpp_headers/pci_ids.o 00:04:37.155 CXX test/cpp_headers/pipe.o 00:04:37.155 CXX test/cpp_headers/queue.o 00:04:37.155 CXX test/cpp_headers/reduce.o 00:04:37.155 CXX test/cpp_headers/rpc.o 00:04:37.155 CXX test/cpp_headers/scheduler.o 00:04:37.155 CXX test/cpp_headers/scsi.o 00:04:37.155 CXX test/cpp_headers/scsi_spec.o 00:04:37.155 CXX test/cpp_headers/sock.o 00:04:37.155 CXX test/cpp_headers/stdinc.o 00:04:37.155 CXX test/cpp_headers/string.o 00:04:37.155 CXX test/cpp_headers/thread.o 00:04:37.155 CXX test/cpp_headers/trace.o 00:04:37.155 LINK reactor 00:04:37.423 LINK event_perf 00:04:37.423 LINK reactor_perf 00:04:37.423 LINK spdk_bdev 00:04:37.423 CXX test/cpp_headers/trace_parser.o 00:04:37.423 CXX test/cpp_headers/tree.o 00:04:37.423 CXX test/cpp_headers/ublk.o 00:04:37.423 LINK 
vhost_fuzz 00:04:37.423 CXX test/cpp_headers/util.o 00:04:37.423 CXX test/cpp_headers/uuid.o 00:04:37.423 CXX test/cpp_headers/version.o 00:04:37.423 LINK app_repeat 00:04:37.423 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.423 LINK spdk_nvme 00:04:37.423 LINK lsvmd 00:04:37.423 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.423 CXX test/cpp_headers/vhost.o 00:04:37.423 LINK mem_callbacks 00:04:37.423 LINK spdk_nvme_perf 00:04:37.423 CXX test/cpp_headers/vmd.o 00:04:37.423 CC app/vhost/vhost.o 00:04:37.423 CXX test/cpp_headers/xor.o 00:04:37.423 CXX test/cpp_headers/zipf.o 00:04:37.423 LINK led 00:04:37.423 LINK hello_sock 00:04:37.688 LINK spdk_nvme_identify 00:04:37.688 LINK scheduler 00:04:37.688 LINK thread 00:04:37.688 LINK spdk_top 00:04:37.688 LINK idxd_perf 00:04:37.688 CC test/nvme/boot_partition/boot_partition.o 00:04:37.688 CC test/nvme/e2edp/nvme_dp.o 00:04:37.688 CC test/nvme/overhead/overhead.o 00:04:37.688 CC test/nvme/aer/aer.o 00:04:37.688 CC test/nvme/connect_stress/connect_stress.o 00:04:37.688 CC test/nvme/reserve/reserve.o 00:04:37.688 CC test/nvme/compliance/nvme_compliance.o 00:04:37.688 CC test/nvme/sgl/sgl.o 00:04:37.688 CC test/nvme/err_injection/err_injection.o 00:04:37.688 CC test/nvme/startup/startup.o 00:04:37.688 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.688 CC test/nvme/fdp/fdp.o 00:04:37.688 CC test/nvme/simple_copy/simple_copy.o 00:04:37.688 CC test/nvme/cuse/cuse.o 00:04:37.688 LINK vhost 00:04:37.688 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:37.688 CC test/nvme/reset/reset.o 00:04:37.688 CC test/accel/dif/dif.o 00:04:37.688 CC test/blobfs/mkfs/mkfs.o 00:04:37.948 CC test/lvol/esnap/esnap.o 00:04:37.948 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.948 CC examples/nvme/arbitration/arbitration.o 00:04:37.948 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.948 CC examples/nvme/hotplug/hotplug.o 00:04:37.948 CC examples/nvme/hello_world/hello_world.o 00:04:37.948 CC examples/nvme/reconnect/reconnect.o 
00:04:37.948 CC examples/nvme/abort/abort.o 00:04:37.948 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.948 LINK startup 00:04:37.948 LINK err_injection 00:04:37.948 LINK connect_stress 00:04:37.948 LINK fused_ordering 00:04:37.948 LINK doorbell_aers 00:04:38.209 CC examples/accel/perf/accel_perf.o 00:04:38.209 LINK boot_partition 00:04:38.209 LINK simple_copy 00:04:38.209 LINK reset 00:04:38.209 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:38.209 LINK aer 00:04:38.209 CC examples/blob/cli/blobcli.o 00:04:38.209 CC examples/blob/hello_world/hello_blob.o 00:04:38.209 LINK reserve 00:04:38.209 LINK sgl 00:04:38.209 LINK mkfs 00:04:38.209 LINK nvme_dp 00:04:38.209 LINK overhead 00:04:38.209 LINK nvme_compliance 00:04:38.209 LINK cmb_copy 00:04:38.209 LINK fdp 00:04:38.468 LINK pmr_persistence 00:04:38.468 LINK memory_ut 00:04:38.468 LINK hello_world 00:04:38.468 LINK arbitration 00:04:38.468 LINK hotplug 00:04:38.468 LINK reconnect 00:04:38.468 LINK hello_fsdev 00:04:38.468 LINK abort 00:04:38.728 LINK nvme_manage 00:04:38.728 LINK hello_blob 00:04:38.728 LINK blobcli 00:04:38.728 LINK dif 00:04:38.728 LINK accel_perf 00:04:38.987 LINK iscsi_fuzz 00:04:39.247 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.247 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.247 CC test/bdev/bdevio/bdevio.o 00:04:39.507 LINK hello_bdev 00:04:39.507 LINK cuse 00:04:39.507 LINK bdevio 00:04:40.075 LINK bdevperf 00:04:40.333 CC examples/nvmf/nvmf/nvmf.o 00:04:40.591 LINK nvmf 00:04:43.126 LINK esnap 00:04:43.384 00:04:43.384 real 1m10.330s 00:04:43.384 user 11m55.713s 00:04:43.384 sys 2m39.683s 00:04:43.384 19:03:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:43.384 19:03:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:43.384 ************************************ 00:04:43.384 END TEST make 00:04:43.384 ************************************ 00:04:43.384 19:03:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:43.384 19:03:28 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:04:43.384 19:03:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:43.384 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.384 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:43.384 19:03:28 -- pm/common@44 -- $ pid=13677 00:04:43.384 19:03:28 -- pm/common@50 -- $ kill -TERM 13677 00:04:43.384 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.385 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:43.385 19:03:28 -- pm/common@44 -- $ pid=13679 00:04:43.385 19:03:28 -- pm/common@50 -- $ kill -TERM 13679 00:04:43.385 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.385 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:43.385 19:03:28 -- pm/common@44 -- $ pid=13681 00:04:43.385 19:03:28 -- pm/common@50 -- $ kill -TERM 13681 00:04:43.385 19:03:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.385 19:03:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:43.385 19:03:28 -- pm/common@44 -- $ pid=13709 00:04:43.385 19:03:28 -- pm/common@50 -- $ sudo -E kill -TERM 13709 00:04:43.385 19:03:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:43.385 19:03:28 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:43.644 19:03:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.644 19:03:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.644 19:03:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 
00:04:43.644 19:03:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.644 19:03:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.644 19:03:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.644 19:03:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.644 19:03:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.644 19:03:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.644 19:03:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.644 19:03:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.644 19:03:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.644 19:03:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.644 19:03:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.644 19:03:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.644 19:03:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:43.644 19:03:28 -- scripts/common.sh@345 -- # : 1 00:04:43.644 19:03:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.644 19:03:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.644 19:03:28 -- scripts/common.sh@365 -- # decimal 1 00:04:43.644 19:03:28 -- scripts/common.sh@353 -- # local d=1 00:04:43.644 19:03:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.644 19:03:28 -- scripts/common.sh@355 -- # echo 1 00:04:43.644 19:03:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.645 19:03:28 -- scripts/common.sh@366 -- # decimal 2 00:04:43.645 19:03:28 -- scripts/common.sh@353 -- # local d=2 00:04:43.645 19:03:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.645 19:03:28 -- scripts/common.sh@355 -- # echo 2 00:04:43.645 19:03:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.645 19:03:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.645 19:03:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.645 19:03:28 -- scripts/common.sh@368 -- # return 0 00:04:43.645 19:03:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.645 19:03:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.645 --rc genhtml_branch_coverage=1 00:04:43.645 --rc genhtml_function_coverage=1 00:04:43.645 --rc genhtml_legend=1 00:04:43.645 --rc geninfo_all_blocks=1 00:04:43.645 --rc geninfo_unexecuted_blocks=1 00:04:43.645 00:04:43.645 ' 00:04:43.645 19:03:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.645 --rc genhtml_branch_coverage=1 00:04:43.645 --rc genhtml_function_coverage=1 00:04:43.645 --rc genhtml_legend=1 00:04:43.645 --rc geninfo_all_blocks=1 00:04:43.645 --rc geninfo_unexecuted_blocks=1 00:04:43.645 00:04:43.645 ' 00:04:43.645 19:03:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.645 --rc genhtml_branch_coverage=1 00:04:43.645 --rc 
genhtml_function_coverage=1 00:04:43.645 --rc genhtml_legend=1 00:04:43.645 --rc geninfo_all_blocks=1 00:04:43.645 --rc geninfo_unexecuted_blocks=1 00:04:43.645 00:04:43.645 ' 00:04:43.645 19:03:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.645 --rc genhtml_branch_coverage=1 00:04:43.645 --rc genhtml_function_coverage=1 00:04:43.645 --rc genhtml_legend=1 00:04:43.645 --rc geninfo_all_blocks=1 00:04:43.645 --rc geninfo_unexecuted_blocks=1 00:04:43.645 00:04:43.645 ' 00:04:43.645 19:03:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:43.645 19:03:28 -- nvmf/common.sh@7 -- # uname -s 00:04:43.645 19:03:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.645 19:03:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.645 19:03:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.645 19:03:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.645 19:03:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.645 19:03:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.645 19:03:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.645 19:03:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.645 19:03:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.645 19:03:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.645 19:03:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:43.645 19:03:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:43.645 19:03:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.645 19:03:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.645 19:03:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:43.645 19:03:28 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.645 19:03:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:43.645 19:03:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.645 19:03:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.645 19:03:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.645 19:03:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.645 19:03:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.645 19:03:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.645 19:03:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.645 19:03:28 -- paths/export.sh@5 -- # export PATH 00:04:43.645 19:03:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.645 19:03:28 -- nvmf/common.sh@51 -- # : 0 00:04:43.645 19:03:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.645 19:03:28 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:43.645 19:03:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.645 19:03:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.645 19:03:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.645 19:03:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.645 19:03:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.645 19:03:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.645 19:03:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.645 19:03:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.645 19:03:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.645 19:03:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.645 19:03:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.645 19:03:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:43.645 19:03:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.645 19:03:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:43.645 19:03:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.645 19:03:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.645 19:03:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.645 19:03:28 -- spdk/autotest.sh@48 -- # udevadm_pid=74397 00:04:43.645 19:03:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.645 19:03:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:43.645 19:03:28 -- pm/common@17 -- # local monitor 00:04:43.645 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.645 19:03:28 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:43.645 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.645 19:03:28 -- pm/common@21 -- # date +%s 00:04:43.645 19:03:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.645 19:03:28 -- pm/common@21 -- # date +%s 00:04:43.645 19:03:28 -- pm/common@25 -- # sleep 1 00:04:43.645 19:03:28 -- pm/common@21 -- # date +%s 00:04:43.645 19:03:28 -- pm/common@21 -- # date +%s 00:04:43.645 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:04:43.645 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:04:43.645 19:03:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:04:43.645 19:03:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733508208 00:04:43.645 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-vmstat.pm.log 00:04:43.645 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-cpu-load.pm.log 00:04:43.645 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-cpu-temp.pm.log 00:04:43.905 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733508208_collect-bmc-pm.bmc.pm.log 00:04:44.845 
19:03:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:44.845 19:03:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:44.845 19:03:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.845 19:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:44.845 19:03:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:44.845 19:03:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:44.845 19:03:29 -- common/autotest_common.sh@10 -- # set +x 00:04:44.845 19:03:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:44.845 19:03:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.845 19:03:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.845 19:03:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:44.845 19:03:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.845 19:03:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:44.845 19:03:29 -- common/autotest_common.sh@1457 -- # uname 00:04:44.845 19:03:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:44.845 19:03:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:44.845 19:03:29 -- common/autotest_common.sh@1477 -- # uname 00:04:44.845 19:03:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:44.845 19:03:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:44.845 19:03:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:44.845 lcov: LCOV version 1.15 00:04:44.845 19:03:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:06.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:06.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:24.862 19:04:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:24.862 19:04:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.862 19:04:07 -- common/autotest_common.sh@10 -- # set +x 00:05:24.862 19:04:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:24.862 19:04:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.862 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:05:24.862 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:24.862 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:24.862 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:24.862 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:24.862 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:24.862 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:24.862 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:24.862 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:24.862 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:24.862 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:24.862 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:24.862 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:24.862 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:24.862 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:24.862 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:24.862 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:24.862 19:04:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:24.862 19:04:09 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:24.862 19:04:09 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:24.862 19:04:09 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:24.862 19:04:09 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:24.863 19:04:09 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:24.863 19:04:09 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:24.863 19:04:09 -- common/autotest_common.sh@1669 -- # bdf=0000:82:00.0 00:05:24.863 19:04:09 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:24.863 19:04:09 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:24.863 19:04:09 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:24.863 19:04:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:24.863 19:04:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:24.863 19:04:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:24.863 19:04:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.863 19:04:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.863 19:04:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:24.863 19:04:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:24.863 19:04:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:24.863 No valid GPT data, bailing 00:05:24.863 19:04:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:24.863 19:04:09 -- scripts/common.sh@394 -- # pt= 00:05:24.863 19:04:09 -- scripts/common.sh@395 -- 
# return 1 00:05:24.863 19:04:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:24.863 1+0 records in 00:05:24.863 1+0 records out 00:05:24.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468347 s, 224 MB/s 00:05:24.863 19:04:09 -- spdk/autotest.sh@105 -- # sync 00:05:24.863 19:04:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:24.863 19:04:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:24.863 19:04:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:26.766 19:04:11 -- spdk/autotest.sh@111 -- # uname -s 00:05:26.766 19:04:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:26.766 19:04:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:26.766 19:04:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:27.702 Hugepages 00:05:27.702 node hugesize free / total 00:05:27.702 node0 1048576kB 0 / 0 00:05:27.702 node0 2048kB 0 / 0 00:05:27.702 node1 1048576kB 0 / 0 00:05:27.702 node1 2048kB 0 / 0 00:05:27.702 00:05:27.702 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:27.702 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:27.702 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:27.702 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:27.703 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:27.703 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:27.703 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:27.703 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:27.703 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:27.703 I/OAT 0000:80:04.6 8086 0e26 1 
ioatdma - - 00:05:27.703 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:27.703 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:27.703 19:04:12 -- spdk/autotest.sh@117 -- # uname -s 00:05:27.703 19:04:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:27.703 19:04:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:27.703 19:04:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.080 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.080 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.080 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.020 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.280 19:04:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:31.223 19:04:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:31.223 19:04:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:31.223 19:04:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:31.223 19:04:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:31.223 19:04:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.223 19:04:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.223 19:04:16 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.223 19:04:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.223 19:04:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.223 19:04:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:31.223 19:04:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:05:31.223 19:04:16 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.609 Waiting for block devices as requested 00:05:32.609 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:32.609 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:32.609 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:32.870 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:32.870 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:32.870 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:32.870 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:33.132 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:33.132 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:33.132 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:33.392 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:33.392 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:33.392 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:33.392 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:33.669 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:33.669 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:33.669 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:33.929 19:04:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:33.929 19:04:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:33.929 19:04:18 -- 
common/autotest_common.sh@1487 -- # grep 0000:82:00.0/nvme/nvme 00:05:33.929 19:04:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:33.929 19:04:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:33.929 19:04:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:33.929 19:04:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:33.929 19:04:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:33.929 19:04:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:33.929 19:04:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:33.929 19:04:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:33.929 19:04:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:33.929 19:04:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:33.929 19:04:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:33.929 19:04:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:33.929 19:04:18 -- common/autotest_common.sh@1543 -- # continue 00:05:33.929 19:04:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:33.929 19:04:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.929 19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:33.929 19:04:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:33.929 19:04:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.929 
19:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:33.929 19:04:18 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:35.312 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:35.313 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:35.313 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:36.253 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:36.253 19:04:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:36.253 19:04:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.253 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:36.253 19:04:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:36.253 19:04:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:36.253 19:04:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.253 19:04:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:36.253 19:04:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:36.253 19:04:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:36.253 19:04:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:36.253 19:04:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:05:36.253 19:04:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:36.253 19:04:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:36.253 19:04:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.253 19:04:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.253 19:04:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.253 19:04:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:36.253 19:04:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:05:36.253 19:04:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:36.253 19:04:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:36.253 19:04:21 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:36.253 19:04:21 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:36.253 19:04:21 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:36.253 19:04:21 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:36.253 19:04:21 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:82:00.0 00:05:36.253 19:04:21 -- common/autotest_common.sh@1579 -- # [[ -z 0000:82:00.0 ]] 00:05:36.253 19:04:21 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=84843 00:05:36.253 19:04:21 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.253 19:04:21 -- common/autotest_common.sh@1585 -- # waitforlisten 84843 00:05:36.253 19:04:21 -- common/autotest_common.sh@835 -- # '[' -z 84843 ']' 00:05:36.253 19:04:21 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.253 19:04:21 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.253 19:04:21 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:36.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.253 19:04:21 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.253 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:36.513 [2024-12-06 19:04:21.355209] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:05:36.513 [2024-12-06 19:04:21.355317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84843 ] 00:05:36.513 [2024-12-06 19:04:21.424470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.513 [2024-12-06 19:04:21.484053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.772 19:04:21 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.772 19:04:21 -- common/autotest_common.sh@868 -- # return 0 00:05:36.772 19:04:21 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:36.772 19:04:21 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:36.772 19:04:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:40.087 nvme0n1 00:05:40.087 19:04:24 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:40.087 [2024-12-06 19:04:25.103244] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:40.087 [2024-12-06 19:04:25.103290] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:40.087 request: 00:05:40.087 { 00:05:40.087 "nvme_ctrlr_name": "nvme0", 00:05:40.087 "password": "test", 00:05:40.087 "method": 
"bdev_nvme_opal_revert", 00:05:40.087 "req_id": 1 00:05:40.087 } 00:05:40.087 Got JSON-RPC error response 00:05:40.087 response: 00:05:40.087 { 00:05:40.087 "code": -32603, 00:05:40.087 "message": "Internal error" 00:05:40.087 } 00:05:40.087 19:04:25 -- common/autotest_common.sh@1591 -- # true 00:05:40.087 19:04:25 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:40.087 19:04:25 -- common/autotest_common.sh@1595 -- # killprocess 84843 00:05:40.087 19:04:25 -- common/autotest_common.sh@954 -- # '[' -z 84843 ']' 00:05:40.087 19:04:25 -- common/autotest_common.sh@958 -- # kill -0 84843 00:05:40.087 19:04:25 -- common/autotest_common.sh@959 -- # uname 00:05:40.087 19:04:25 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.087 19:04:25 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84843 00:05:40.344 19:04:25 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.344 19:04:25 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.344 19:04:25 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84843' 00:05:40.344 killing process with pid 84843 00:05:40.344 19:04:25 -- common/autotest_common.sh@973 -- # kill 84843 00:05:40.344 19:04:25 -- common/autotest_common.sh@978 -- # wait 84843 00:05:42.299 19:04:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:42.299 19:04:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:42.299 19:04:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.299 19:04:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.299 19:04:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:42.299 19:04:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.299 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:42.299 19:04:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:42.299 19:04:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:42.299 19:04:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.299 19:04:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.299 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:42.299 ************************************ 00:05:42.299 START TEST env 00:05:42.299 ************************************ 00:05:42.299 19:04:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:42.299 * Looking for test storage... 00:05:42.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:42.299 19:04:26 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.299 19:04:26 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.299 19:04:26 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.299 19:04:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.299 19:04:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.299 19:04:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.299 19:04:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.299 19:04:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.299 19:04:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.299 19:04:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.299 19:04:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.299 19:04:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.299 19:04:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.299 19:04:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.299 19:04:27 env -- scripts/common.sh@344 -- # case "$op" in 00:05:42.299 19:04:27 env -- scripts/common.sh@345 -- # : 1 00:05:42.299 19:04:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.299 19:04:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.299 19:04:27 env -- scripts/common.sh@365 -- # decimal 1 00:05:42.299 19:04:27 env -- scripts/common.sh@353 -- # local d=1 00:05:42.299 19:04:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.299 19:04:27 env -- scripts/common.sh@355 -- # echo 1 00:05:42.299 19:04:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.299 19:04:27 env -- scripts/common.sh@366 -- # decimal 2 00:05:42.299 19:04:27 env -- scripts/common.sh@353 -- # local d=2 00:05:42.299 19:04:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.299 19:04:27 env -- scripts/common.sh@355 -- # echo 2 00:05:42.299 19:04:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.299 19:04:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.299 19:04:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.299 19:04:27 env -- scripts/common.sh@368 -- # return 0 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.299 --rc genhtml_branch_coverage=1 00:05:42.299 --rc genhtml_function_coverage=1 00:05:42.299 --rc genhtml_legend=1 00:05:42.299 --rc geninfo_all_blocks=1 00:05:42.299 --rc geninfo_unexecuted_blocks=1 00:05:42.299 00:05:42.299 ' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.299 --rc genhtml_branch_coverage=1 00:05:42.299 --rc genhtml_function_coverage=1 00:05:42.299 --rc genhtml_legend=1 00:05:42.299 --rc geninfo_all_blocks=1 00:05:42.299 --rc geninfo_unexecuted_blocks=1 00:05:42.299 00:05:42.299 ' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:42.299 --rc genhtml_branch_coverage=1 00:05:42.299 --rc genhtml_function_coverage=1 00:05:42.299 --rc genhtml_legend=1 00:05:42.299 --rc geninfo_all_blocks=1 00:05:42.299 --rc geninfo_unexecuted_blocks=1 00:05:42.299 00:05:42.299 ' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.299 --rc genhtml_branch_coverage=1 00:05:42.299 --rc genhtml_function_coverage=1 00:05:42.299 --rc genhtml_legend=1 00:05:42.299 --rc geninfo_all_blocks=1 00:05:42.299 --rc geninfo_unexecuted_blocks=1 00:05:42.299 00:05:42.299 ' 00:05:42.299 19:04:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.299 19:04:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.299 ************************************ 00:05:42.299 START TEST env_memory 00:05:42.299 ************************************ 00:05:42.299 19:04:27 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.299 00:05:42.299 00:05:42.299 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.299 http://cunit.sourceforge.net/ 00:05:42.299 00:05:42.299 00:05:42.299 Suite: memory 00:05:42.299 Test: alloc and free memory map ...[2024-12-06 19:04:27.119982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.299 passed 00:05:42.299 Test: mem map translation ...[2024-12-06 19:04:27.139799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.299 [2024-12-06 
19:04:27.139820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.299 [2024-12-06 19:04:27.139869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.299 [2024-12-06 19:04:27.139881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.299 passed 00:05:42.299 Test: mem map registration ...[2024-12-06 19:04:27.181050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:42.299 [2024-12-06 19:04:27.181080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:42.299 passed 00:05:42.299 Test: mem map adjacent registrations ...passed 00:05:42.299 00:05:42.299 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.299 suites 1 1 n/a 0 0 00:05:42.299 tests 4 4 4 0 0 00:05:42.299 asserts 152 152 152 0 n/a 00:05:42.299 00:05:42.299 Elapsed time = 0.137 seconds 00:05:42.299 00:05:42.299 real 0m0.146s 00:05:42.299 user 0m0.138s 00:05:42.299 sys 0m0.008s 00:05:42.299 19:04:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.299 19:04:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:42.299 ************************************ 00:05:42.299 END TEST env_memory 00:05:42.299 ************************************ 00:05:42.299 19:04:27 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:42.299 19:04:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.299 19:04:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.299 ************************************ 00:05:42.299 START TEST env_vtophys 00:05:42.299 ************************************ 00:05:42.299 19:04:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.299 EAL: lib.eal log level changed from notice to debug 00:05:42.299 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.299 EAL: Detected lcore 1 as core 1 on socket 0 00:05:42.299 EAL: Detected lcore 2 as core 2 on socket 0 00:05:42.299 EAL: Detected lcore 3 as core 3 on socket 0 00:05:42.299 EAL: Detected lcore 4 as core 4 on socket 0 00:05:42.299 EAL: Detected lcore 5 as core 5 on socket 0 00:05:42.299 EAL: Detected lcore 6 as core 8 on socket 0 00:05:42.299 EAL: Detected lcore 7 as core 9 on socket 0 00:05:42.300 EAL: Detected lcore 8 as core 10 on socket 0 00:05:42.300 EAL: Detected lcore 9 as core 11 on socket 0 00:05:42.300 EAL: Detected lcore 10 as core 12 on socket 0 00:05:42.300 EAL: Detected lcore 11 as core 13 on socket 0 00:05:42.300 EAL: Detected lcore 12 as core 0 on socket 1 00:05:42.300 EAL: Detected lcore 13 as core 1 on socket 1 00:05:42.300 EAL: Detected lcore 14 as core 2 on socket 1 00:05:42.300 EAL: Detected lcore 15 as core 3 on socket 1 00:05:42.300 EAL: Detected lcore 16 as core 4 on socket 1 00:05:42.300 EAL: Detected lcore 17 as core 5 on socket 1 00:05:42.300 EAL: Detected lcore 18 as core 8 on socket 1 00:05:42.300 EAL: Detected lcore 19 as core 9 on socket 1 00:05:42.300 EAL: Detected lcore 20 as core 10 on socket 1 00:05:42.300 EAL: Detected lcore 21 as core 11 on socket 1 00:05:42.300 EAL: Detected lcore 22 as core 12 on socket 1 00:05:42.300 EAL: Detected lcore 23 as core 13 on socket 1 00:05:42.300 EAL: Detected lcore 24 as core 0 on socket 0 00:05:42.300 EAL: Detected lcore 25 as core 
1 on socket 0 00:05:42.300 EAL: Detected lcore 26 as core 2 on socket 0 00:05:42.300 EAL: Detected lcore 27 as core 3 on socket 0 00:05:42.300 EAL: Detected lcore 28 as core 4 on socket 0 00:05:42.300 EAL: Detected lcore 29 as core 5 on socket 0 00:05:42.300 EAL: Detected lcore 30 as core 8 on socket 0 00:05:42.300 EAL: Detected lcore 31 as core 9 on socket 0 00:05:42.300 EAL: Detected lcore 32 as core 10 on socket 0 00:05:42.300 EAL: Detected lcore 33 as core 11 on socket 0 00:05:42.300 EAL: Detected lcore 34 as core 12 on socket 0 00:05:42.300 EAL: Detected lcore 35 as core 13 on socket 0 00:05:42.300 EAL: Detected lcore 36 as core 0 on socket 1 00:05:42.300 EAL: Detected lcore 37 as core 1 on socket 1 00:05:42.300 EAL: Detected lcore 38 as core 2 on socket 1 00:05:42.300 EAL: Detected lcore 39 as core 3 on socket 1 00:05:42.300 EAL: Detected lcore 40 as core 4 on socket 1 00:05:42.300 EAL: Detected lcore 41 as core 5 on socket 1 00:05:42.300 EAL: Detected lcore 42 as core 8 on socket 1 00:05:42.300 EAL: Detected lcore 43 as core 9 on socket 1 00:05:42.300 EAL: Detected lcore 44 as core 10 on socket 1 00:05:42.300 EAL: Detected lcore 45 as core 11 on socket 1 00:05:42.300 EAL: Detected lcore 46 as core 12 on socket 1 00:05:42.300 EAL: Detected lcore 47 as core 13 on socket 1 00:05:42.300 EAL: Maximum logical cores by configuration: 128 00:05:42.300 EAL: Detected CPU lcores: 48 00:05:42.300 EAL: Detected NUMA nodes: 2 00:05:42.300 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:42.300 EAL: Detected shared linkage of DPDK 00:05:42.300 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.300 EAL: Bus pci wants IOVA as 'DC' 00:05:42.300 EAL: Buses did not request a specific IOVA mode. 00:05:42.300 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:42.300 EAL: Selected IOVA mode 'VA' 00:05:42.300 EAL: Probing VFIO support... 
00:05:42.300 EAL: IOMMU type 1 (Type 1) is supported 00:05:42.300 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:42.300 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:42.300 EAL: VFIO support initialized 00:05:42.300 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.300 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.300 EAL: Setting up physically contiguous memory... 00:05:42.300 EAL: Setting maximum number of open files to 524288 00:05:42.300 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.300 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:42.300 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.300 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:42.300 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:42.300 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.300 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:42.300 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.300 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.300 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:42.300 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:42.300 EAL: Hugepages will be freed exactly as allocated. 00:05:42.300 EAL: No shared files mode enabled, IPC is disabled 00:05:42.300 EAL: No shared files mode enabled, IPC is disabled 00:05:42.300 EAL: TSC frequency is ~2700000 KHz 00:05:42.300 EAL: Main lcore 0 is ready (tid=7effca94da00;cpuset=[0]) 00:05:42.300 EAL: Trying to obtain current memory policy. 00:05:42.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.300 EAL: Restoring previous memory policy: 0 00:05:42.300 EAL: request: mp_malloc_sync 00:05:42.300 EAL: No shared files mode enabled, IPC is disabled 00:05:42.300 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.300 EAL: No shared files mode enabled, IPC is disabled 00:05:42.558 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.558 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.558 00:05:42.558 00:05:42.558 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.558 http://cunit.sourceforge.net/ 00:05:42.558 00:05:42.558 00:05:42.558 Suite: components_suite 00:05:42.558 Test: vtophys_malloc_test ...passed 00:05:42.558 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.558 EAL: Restoring previous memory policy: 4 00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.558 EAL: request: mp_malloc_sync 00:05:42.558 EAL: No shared files mode enabled, IPC is disabled 00:05:42.558 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.558 EAL: request: mp_malloc_sync 00:05:42.558 EAL: No shared files mode enabled, IPC is disabled 00:05:42.558 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.558 EAL: Trying to obtain current memory policy. 
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 6MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 6MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 10MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 10MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 18MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 18MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 34MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 34MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 66MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 66MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 130MB
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was shrunk by 130MB
00:05:42.558 EAL: Trying to obtain current memory policy.
00:05:42.558 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.558 EAL: Restoring previous memory policy: 4
00:05:42.558 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.558 EAL: request: mp_malloc_sync
00:05:42.558 EAL: No shared files mode enabled, IPC is disabled
00:05:42.558 EAL: Heap on socket 0 was expanded by 258MB
00:05:42.816 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.816 EAL: request: mp_malloc_sync
00:05:42.816 EAL: No shared files mode enabled, IPC is disabled
00:05:42.816 EAL: Heap on socket 0 was shrunk by 258MB
00:05:42.816 EAL: Trying to obtain current memory policy.
00:05:42.816 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:42.816 EAL: Restoring previous memory policy: 4
00:05:42.816 EAL: Calling mem event callback 'spdk:(nil)'
00:05:42.816 EAL: request: mp_malloc_sync
00:05:42.816 EAL: No shared files mode enabled, IPC is disabled
00:05:42.816 EAL: Heap on socket 0 was expanded by 514MB
00:05:43.074 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.074 EAL: request: mp_malloc_sync
00:05:43.074 EAL: No shared files mode enabled, IPC is disabled
00:05:43.074 EAL: Heap on socket 0 was shrunk by 514MB
00:05:43.074 EAL: Trying to obtain current memory policy.
00:05:43.074 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:43.331 EAL: Restoring previous memory policy: 4
00:05:43.331 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.331 EAL: request: mp_malloc_sync
00:05:43.331 EAL: No shared files mode enabled, IPC is disabled
00:05:43.331 EAL: Heap on socket 0 was expanded by 1026MB
00:05:43.589 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.862 EAL: request: mp_malloc_sync
00:05:43.862 EAL: No shared files mode enabled, IPC is disabled
00:05:43.862 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:43.862 passed
00:05:43.862
00:05:43.862 Run Summary: Type Total Ran Passed Failed Inactive
00:05:43.862 suites 1 1 n/a 0 0
00:05:43.862 tests 2 2 2 0 0
00:05:43.862 asserts 497 497 497 0 n/a
00:05:43.862
00:05:43.862 Elapsed time = 1.312 seconds
00:05:43.862 EAL: Calling mem event callback 'spdk:(nil)'
00:05:43.862 EAL: request: mp_malloc_sync
00:05:43.862 EAL: No shared files mode enabled, IPC is disabled
00:05:43.862 EAL: Heap on socket 0 was shrunk by 2MB
00:05:43.862 EAL: No shared files mode enabled, IPC is disabled
00:05:43.862 EAL: No shared files mode enabled, IPC is disabled
00:05:43.862 EAL: No shared files mode enabled, IPC is disabled
00:05:43.862
00:05:43.862 real 0m1.431s
00:05:43.862 user 0m0.833s
00:05:43.862 sys 0m0.568s
00:05:43.862 19:04:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.862 19:04:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:43.862 ************************************
00:05:43.862 END TEST env_vtophys
00:05:43.862 ************************************
00:05:43.862 19:04:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:43.862 19:04:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:43.862 19:04:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.862 19:04:28 env -- common/autotest_common.sh@10 -- # set +x
00:05:43.862 ************************************
00:05:43.862 START TEST env_pci
00:05:43.862 ************************************
00:05:43.862 19:04:28 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:43.862
00:05:43.862
00:05:43.862 CUnit - A unit testing framework for C - Version 2.1-3
00:05:43.862 http://cunit.sourceforge.net/
00:05:43.862
00:05:43.862
00:05:43.862 Suite: pci
00:05:43.862 Test: pci_hook ...[2024-12-06 19:04:28.781259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 85748 has claimed it
00:05:43.862 EAL: Cannot find device (10000:00:01.0)
00:05:43.862 EAL: Failed to attach device on primary process
00:05:43.862 passed
00:05:43.862
00:05:43.862 Run Summary: Type Total Ran Passed Failed Inactive
00:05:43.862 suites 1 1 n/a 0 0
00:05:43.862 tests 1 1 1 0 0
00:05:43.862 asserts 25 25 25 0 n/a
00:05:43.862
00:05:43.862 Elapsed time = 0.024 seconds
00:05:43.862
00:05:43.862 real 0m0.037s
00:05:43.862 user 0m0.010s
00:05:43.862 sys 0m0.027s
00:05:43.862 19:04:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.862 19:04:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:43.862 ************************************
00:05:43.862 END TEST env_pci
00:05:43.862 ************************************
00:05:43.862 19:04:28 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:43.862 19:04:28 env -- env/env.sh@15 -- # uname
00:05:43.862 19:04:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:43.862 19:04:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:43.862 19:04:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:43.862 19:04:28 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:43.862 19:04:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.862 19:04:28 env -- common/autotest_common.sh@10 -- # set +x
00:05:43.862 ************************************
00:05:43.862 START TEST env_dpdk_post_init
00:05:43.862 ************************************
00:05:43.862 19:04:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:43.862 EAL: Detected CPU lcores: 48
00:05:43.862 EAL: Detected NUMA nodes: 2
00:05:43.862 EAL: Detected shared linkage of DPDK
00:05:43.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:43.862 EAL: Selected IOVA mode 'VA'
00:05:43.862 EAL: VFIO support initialized
00:05:43.862 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:44.122 EAL: Using IOMMU type 1 (Type 1)
00:05:44.122 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:44.122 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:44.123 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:45.064 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1)
00:05:48.349 EAL: Releasing PCI mapped resource for 0000:82:00.0
00:05:48.349 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000
00:05:48.349 Starting DPDK initialization...
00:05:48.349 Starting SPDK post initialization...
00:05:48.349 SPDK NVMe probe
00:05:48.349 Attaching to 0000:82:00.0
00:05:48.349 Attached to 0000:82:00.0
00:05:48.349 Cleaning up...
00:05:48.350
00:05:48.350 real 0m4.445s
00:05:48.350 user 0m3.052s
00:05:48.350 sys 0m0.453s
00:05:48.350 19:04:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.350 19:04:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:48.350 ************************************
00:05:48.350 END TEST env_dpdk_post_init
00:05:48.350 ************************************
00:05:48.350 19:04:33 env -- env/env.sh@26 -- # uname
00:05:48.350 19:04:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:48.350 19:04:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:48.350 19:04:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.350 19:04:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.350 19:04:33 env -- common/autotest_common.sh@10 -- # set +x
00:05:48.350 ************************************
00:05:48.350 START TEST env_mem_callbacks
00:05:48.350 ************************************
00:05:48.350 19:04:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:48.350 EAL: Detected CPU lcores: 48
00:05:48.350 EAL: Detected NUMA nodes: 2
00:05:48.350 EAL: Detected shared linkage of DPDK
00:05:48.350 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:48.350 EAL: Selected IOVA mode 'VA'
00:05:48.350 EAL: VFIO support initialized
00:05:48.609 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:48.609
00:05:48.609
00:05:48.609 CUnit - A unit testing framework for C - Version 2.1-3
00:05:48.609 http://cunit.sourceforge.net/
00:05:48.609
00:05:48.609
00:05:48.609 Suite: memory
00:05:48.609 Test: test ...
00:05:48.609 register 0x200000200000 2097152
00:05:48.609 malloc 3145728
00:05:48.609 register 0x200000400000 4194304
00:05:48.609 buf 0x200000500000 len 3145728 PASSED
00:05:48.609 malloc 64
00:05:48.609 buf 0x2000004fff40 len 64 PASSED
00:05:48.609 malloc 4194304
00:05:48.609 register 0x200000800000 6291456
00:05:48.609 buf 0x200000a00000 len 4194304 PASSED
00:05:48.609 free 0x200000500000 3145728
00:05:48.609 free 0x2000004fff40 64
00:05:48.609 unregister 0x200000400000 4194304 PASSED
00:05:48.609 free 0x200000a00000 4194304
00:05:48.609 unregister 0x200000800000 6291456 PASSED
00:05:48.609 malloc 8388608
00:05:48.609 register 0x200000400000 10485760
00:05:48.609 buf 0x200000600000 len 8388608 PASSED
00:05:48.609 free 0x200000600000 8388608
00:05:48.609 unregister 0x200000400000 10485760 PASSED
00:05:48.609 passed
00:05:48.609
00:05:48.609 Run Summary: Type Total Ran Passed Failed Inactive
00:05:48.609 suites 1 1 n/a 0 0
00:05:48.609 tests 1 1 1 0 0
00:05:48.609 asserts 15 15 15 0 n/a
00:05:48.609
00:05:48.609 Elapsed time = 0.005 seconds
00:05:48.609
00:05:48.609 real 0m0.050s
00:05:48.609 user 0m0.016s
00:05:48.609 sys 0m0.034s
00:05:48.609 19:04:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.609 19:04:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:48.609 ************************************
00:05:48.609 END TEST env_mem_callbacks
00:05:48.609 ************************************
00:05:48.609
00:05:48.609 real 0m6.511s
00:05:48.609 user 0m4.256s
00:05:48.609 sys 0m1.304s
00:05:48.609 19:04:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.609 19:04:33 env -- common/autotest_common.sh@10 -- # set +x
00:05:48.609 ************************************
00:05:48.609 END TEST env
00:05:48.609 ************************************
00:05:48.609 19:04:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:48.609 19:04:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.609 19:04:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.609 19:04:33 -- common/autotest_common.sh@10 -- # set +x
00:05:48.609 ************************************
00:05:48.609 START TEST rpc
00:05:48.609 ************************************
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:48.609 * Looking for test storage...
00:05:48.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:48.609 19:04:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:48.609 19:04:33 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:48.609 19:04:33 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:48.609 19:04:33 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:48.609 19:04:33 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:48.609 19:04:33 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:48.609 19:04:33 rpc -- scripts/common.sh@345 -- # : 1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:48.609 19:04:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:48.609 19:04:33 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@353 -- # local d=1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:48.609 19:04:33 rpc -- scripts/common.sh@355 -- # echo 1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:48.609 19:04:33 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@353 -- # local d=2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:48.609 19:04:33 rpc -- scripts/common.sh@355 -- # echo 2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:48.609 19:04:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:48.609 19:04:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:48.609 19:04:33 rpc -- scripts/common.sh@368 -- # return 0
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:48.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.609 --rc genhtml_branch_coverage=1
00:05:48.609 --rc genhtml_function_coverage=1
00:05:48.609 --rc genhtml_legend=1
00:05:48.609 --rc geninfo_all_blocks=1
00:05:48.609 --rc geninfo_unexecuted_blocks=1
00:05:48.609
00:05:48.609 '
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:48.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.609 --rc genhtml_branch_coverage=1
00:05:48.609 --rc genhtml_function_coverage=1
00:05:48.609 --rc genhtml_legend=1
00:05:48.609 --rc geninfo_all_blocks=1
00:05:48.609 --rc geninfo_unexecuted_blocks=1
00:05:48.609
00:05:48.609 '
00:05:48.609 19:04:33 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:48.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.609 --rc genhtml_branch_coverage=1
00:05:48.609 --rc genhtml_function_coverage=1
00:05:48.610 --rc genhtml_legend=1
00:05:48.610 --rc geninfo_all_blocks=1
00:05:48.610 --rc geninfo_unexecuted_blocks=1
00:05:48.610
00:05:48.610 '
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:48.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:48.610 --rc genhtml_branch_coverage=1
00:05:48.610 --rc genhtml_function_coverage=1
00:05:48.610 --rc genhtml_legend=1
00:05:48.610 --rc geninfo_all_blocks=1
00:05:48.610 --rc geninfo_unexecuted_blocks=1
00:05:48.610
00:05:48.610 '
00:05:48.610 19:04:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=86524
00:05:48.610 19:04:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:48.610 19:04:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:48.610 19:04:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 86524
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 86524 ']'
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:48.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.610 19:04:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.869 [2024-12-06 19:04:33.677838] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:05:48.869 [2024-12-06 19:04:33.677917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86524 ]
00:05:48.869 [2024-12-06 19:04:33.745327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.869 [2024-12-06 19:04:33.800635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:48.869 [2024-12-06 19:04:33.800699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 86524' to capture a snapshot of events at runtime.
00:05:48.869 [2024-12-06 19:04:33.800736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:48.869 [2024-12-06 19:04:33.800749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:48.869 [2024-12-06 19:04:33.800759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid86524 for offline analysis/debug.
00:05:48.869 [2024-12-06 19:04:33.801364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.128 19:04:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:49.128 19:04:34 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:49.128 19:04:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:49.129 19:04:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:49.129 19:04:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:49.129 19:04:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:49.129 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.129 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.129 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.129 ************************************
00:05:49.129 START TEST rpc_integrity
00:05:49.129 ************************************
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.129 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:49.129 {
00:05:49.129 "name": "Malloc0",
00:05:49.129 "aliases": [
00:05:49.129 "66231d84-b8dd-4a85-b5f7-e0b5a88d8ef1"
00:05:49.129 ],
00:05:49.129 "product_name": "Malloc disk",
00:05:49.129 "block_size": 512,
00:05:49.129 "num_blocks": 16384,
00:05:49.129 "uuid": "66231d84-b8dd-4a85-b5f7-e0b5a88d8ef1",
00:05:49.129 "assigned_rate_limits": {
00:05:49.129 "rw_ios_per_sec": 0,
00:05:49.129 "rw_mbytes_per_sec": 0,
00:05:49.129 "r_mbytes_per_sec": 0,
00:05:49.129 "w_mbytes_per_sec": 0
00:05:49.129 },
00:05:49.129 "claimed": false,
00:05:49.129 "zoned": false,
00:05:49.129 "supported_io_types": {
00:05:49.129 "read": true,
00:05:49.129 "write": true,
00:05:49.129 "unmap": true,
00:05:49.129 "flush": true,
00:05:49.129 "reset": true,
00:05:49.129 "nvme_admin": false,
00:05:49.129 "nvme_io": false,
00:05:49.129 "nvme_io_md": false,
00:05:49.129 "write_zeroes": true,
00:05:49.129 "zcopy": true,
00:05:49.129 "get_zone_info": false,
00:05:49.129 "zone_management": false,
00:05:49.129 "zone_append": false,
00:05:49.129 "compare": false,
00:05:49.129 "compare_and_write": false,
00:05:49.129 "abort": true,
00:05:49.129 "seek_hole": false,
00:05:49.129 "seek_data": false,
00:05:49.129 "copy": true,
00:05:49.129 "nvme_iov_md": false
00:05:49.129 },
00:05:49.129 "memory_domains": [
00:05:49.129 {
00:05:49.129 "dma_device_id": "system",
00:05:49.129 "dma_device_type": 1
00:05:49.129 },
00:05:49.129 {
00:05:49.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:49.129 "dma_device_type": 2
00:05:49.129 }
00:05:49.129 ],
00:05:49.129 "driver_specific": {}
00:05:49.129 }
00:05:49.129 ]'
00:05:49.129 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 [2024-12-06 19:04:34.181968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:49.389 [2024-12-06 19:04:34.182026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:49.389 [2024-12-06 19:04:34.182049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf4b6a0
00:05:49.389 [2024-12-06 19:04:34.182078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:49.389 [2024-12-06 19:04:34.183428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:49.389 [2024-12-06 19:04:34.183450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:49.389 Passthru0
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:49.389 {
00:05:49.389 "name": "Malloc0",
00:05:49.389 "aliases": [
00:05:49.389 "66231d84-b8dd-4a85-b5f7-e0b5a88d8ef1"
00:05:49.389 ],
00:05:49.389 "product_name": "Malloc disk",
00:05:49.389 "block_size": 512,
00:05:49.389 "num_blocks": 16384,
00:05:49.389 "uuid": "66231d84-b8dd-4a85-b5f7-e0b5a88d8ef1",
00:05:49.389 "assigned_rate_limits": {
00:05:49.389 "rw_ios_per_sec": 0,
00:05:49.389 "rw_mbytes_per_sec": 0,
00:05:49.389 "r_mbytes_per_sec": 0,
00:05:49.389 "w_mbytes_per_sec": 0
00:05:49.389 },
00:05:49.389 "claimed": true,
00:05:49.389 "claim_type": "exclusive_write",
00:05:49.389 "zoned": false,
00:05:49.389 "supported_io_types": {
00:05:49.389 "read": true,
00:05:49.389 "write": true,
00:05:49.389 "unmap": true,
00:05:49.389 "flush": true,
00:05:49.389 "reset": true,
00:05:49.389 "nvme_admin": false,
00:05:49.389 "nvme_io": false,
00:05:49.389 "nvme_io_md": false,
00:05:49.389 "write_zeroes": true,
00:05:49.389 "zcopy": true,
00:05:49.389 "get_zone_info": false,
00:05:49.389 "zone_management": false,
00:05:49.389 "zone_append": false,
00:05:49.389 "compare": false,
00:05:49.389 "compare_and_write": false,
00:05:49.389 "abort": true,
00:05:49.389 "seek_hole": false,
00:05:49.389 "seek_data": false,
00:05:49.389 "copy": true,
00:05:49.389 "nvme_iov_md": false
00:05:49.389 },
00:05:49.389 "memory_domains": [
00:05:49.389 {
00:05:49.389 "dma_device_id": "system",
00:05:49.389 "dma_device_type": 1
00:05:49.389 },
00:05:49.389 {
00:05:49.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:49.389 "dma_device_type": 2
00:05:49.389 }
00:05:49.389 ],
00:05:49.389 "driver_specific": {}
00:05:49.389 },
00:05:49.389 {
00:05:49.389 "name": "Passthru0",
00:05:49.389 "aliases": [
00:05:49.389 "44f21c68-ca9f-52a6-9866-448b5060b994"
00:05:49.389 ],
00:05:49.389 "product_name": "passthru",
00:05:49.389 "block_size": 512,
00:05:49.389 "num_blocks": 16384,
00:05:49.389 "uuid": "44f21c68-ca9f-52a6-9866-448b5060b994",
00:05:49.389 "assigned_rate_limits": {
00:05:49.389 "rw_ios_per_sec": 0,
00:05:49.389 "rw_mbytes_per_sec": 0,
00:05:49.389 "r_mbytes_per_sec": 0,
00:05:49.389 "w_mbytes_per_sec": 0
00:05:49.389 },
00:05:49.389 "claimed": false,
00:05:49.389 "zoned": false,
00:05:49.389 "supported_io_types": {
00:05:49.389 "read": true,
00:05:49.389 "write": true,
00:05:49.389 "unmap": true,
00:05:49.389 "flush": true,
00:05:49.389 "reset": true,
00:05:49.389 "nvme_admin": false,
00:05:49.389 "nvme_io": false,
00:05:49.389 "nvme_io_md": false,
00:05:49.389 "write_zeroes": true,
00:05:49.389 "zcopy": true,
00:05:49.389 "get_zone_info": false,
00:05:49.389 "zone_management": false,
00:05:49.389 "zone_append": false,
00:05:49.389 "compare": false,
00:05:49.389 "compare_and_write": false,
00:05:49.389 "abort": true,
00:05:49.389 "seek_hole": false,
00:05:49.389 "seek_data": false,
00:05:49.389 "copy": true,
00:05:49.389 "nvme_iov_md": false
00:05:49.389 },
00:05:49.389 "memory_domains": [
00:05:49.389 {
00:05:49.389 "dma_device_id": "system",
00:05:49.389 "dma_device_type": 1
00:05:49.389 },
00:05:49.389 {
00:05:49.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:49.389 "dma_device_type": 2
00:05:49.389 }
00:05:49.389 ],
00:05:49.389 "driver_specific": {
00:05:49.389 "passthru": {
00:05:49.389 "name": "Passthru0",
00:05:49.389 "base_bdev_name": "Malloc0"
00:05:49.389 }
00:05:49.389 }
00:05:49.389 }
00:05:49.389 ]'
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:49.389 19:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:49.389
00:05:49.389 real 0m0.217s
00:05:49.389 user 0m0.133s
00:05:49.389 sys 0m0.026s
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.389 19:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 ************************************
00:05:49.389 END TEST rpc_integrity
00:05:49.389 ************************************
00:05:49.389 19:04:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:49.389 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:49.389 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.389 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 ************************************
00:05:49.389 START TEST rpc_plugins
00:05:49.389 ************************************ 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:49.389 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.389 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.389 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.389 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.389 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.389 { 00:05:49.389 "name": "Malloc1", 00:05:49.389 "aliases": [ 00:05:49.389 "c86f57cf-b64d-4b79-88b2-87f772d0fcfd" 00:05:49.389 ], 00:05:49.389 "product_name": "Malloc disk", 00:05:49.389 "block_size": 4096, 00:05:49.389 "num_blocks": 256, 00:05:49.389 "uuid": "c86f57cf-b64d-4b79-88b2-87f772d0fcfd", 00:05:49.389 "assigned_rate_limits": { 00:05:49.389 "rw_ios_per_sec": 0, 00:05:49.389 "rw_mbytes_per_sec": 0, 00:05:49.389 "r_mbytes_per_sec": 0, 00:05:49.389 "w_mbytes_per_sec": 0 00:05:49.389 }, 00:05:49.389 "claimed": false, 00:05:49.389 "zoned": false, 00:05:49.389 "supported_io_types": { 00:05:49.389 "read": true, 00:05:49.389 "write": true, 00:05:49.389 "unmap": true, 00:05:49.389 "flush": true, 00:05:49.389 "reset": true, 00:05:49.389 "nvme_admin": false, 00:05:49.389 "nvme_io": false, 00:05:49.389 "nvme_io_md": false, 00:05:49.389 "write_zeroes": true, 00:05:49.389 "zcopy": true, 00:05:49.389 "get_zone_info": false, 00:05:49.389 "zone_management": false, 00:05:49.390 
"zone_append": false, 00:05:49.390 "compare": false, 00:05:49.390 "compare_and_write": false, 00:05:49.390 "abort": true, 00:05:49.390 "seek_hole": false, 00:05:49.390 "seek_data": false, 00:05:49.390 "copy": true, 00:05:49.390 "nvme_iov_md": false 00:05:49.390 }, 00:05:49.390 "memory_domains": [ 00:05:49.390 { 00:05:49.390 "dma_device_id": "system", 00:05:49.390 "dma_device_type": 1 00:05:49.390 }, 00:05:49.390 { 00:05:49.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.390 "dma_device_type": 2 00:05:49.390 } 00:05:49.390 ], 00:05:49.390 "driver_specific": {} 00:05:49.390 } 00:05:49.390 ]' 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.390 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:49.390 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:49.648 19:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:49.648 00:05:49.648 real 0m0.107s 00:05:49.648 user 0m0.069s 00:05:49.648 sys 0m0.008s 00:05:49.648 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.648 19:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.648 ************************************ 
00:05:49.648 END TEST rpc_plugins 00:05:49.648 ************************************ 00:05:49.648 19:04:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.648 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.648 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.648 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.648 ************************************ 00:05:49.648 START TEST rpc_trace_cmd_test 00:05:49.648 ************************************ 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.648 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.648 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid86524", 00:05:49.648 "tpoint_group_mask": "0x8", 00:05:49.648 "iscsi_conn": { 00:05:49.648 "mask": "0x2", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "scsi": { 00:05:49.648 "mask": "0x4", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "bdev": { 00:05:49.648 "mask": "0x8", 00:05:49.648 "tpoint_mask": "0xffffffffffffffff" 00:05:49.648 }, 00:05:49.648 "nvmf_rdma": { 00:05:49.648 "mask": "0x10", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "nvmf_tcp": { 00:05:49.648 "mask": "0x20", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "ftl": { 00:05:49.648 "mask": "0x40", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "blobfs": { 00:05:49.648 "mask": "0x80", 00:05:49.648 
"tpoint_mask": "0x0" 00:05:49.648 }, 00:05:49.648 "dsa": { 00:05:49.648 "mask": "0x200", 00:05:49.648 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "thread": { 00:05:49.649 "mask": "0x400", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "nvme_pcie": { 00:05:49.649 "mask": "0x800", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "iaa": { 00:05:49.649 "mask": "0x1000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "nvme_tcp": { 00:05:49.649 "mask": "0x2000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "bdev_nvme": { 00:05:49.649 "mask": "0x4000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "sock": { 00:05:49.649 "mask": "0x8000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "blob": { 00:05:49.649 "mask": "0x10000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "bdev_raid": { 00:05:49.649 "mask": "0x20000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 }, 00:05:49.649 "scheduler": { 00:05:49.649 "mask": "0x40000", 00:05:49.649 "tpoint_mask": "0x0" 00:05:49.649 } 00:05:49.649 }' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:49.649 00:05:49.649 real 0m0.182s 00:05:49.649 user 0m0.161s 00:05:49.649 sys 0m0.013s 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.649 19:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.649 ************************************ 00:05:49.649 END TEST rpc_trace_cmd_test 00:05:49.649 ************************************ 00:05:49.908 19:04:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.908 19:04:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.908 19:04:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.908 19:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.908 19:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.908 19:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.908 ************************************ 00:05:49.908 START TEST rpc_daemon_integrity 00:05:49.908 ************************************ 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.908 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.909 { 00:05:49.909 "name": "Malloc2", 00:05:49.909 "aliases": [ 00:05:49.909 "103a2bfd-dc95-40c8-b60e-bb38f09fe262" 00:05:49.909 ], 00:05:49.909 "product_name": "Malloc disk", 00:05:49.909 "block_size": 512, 00:05:49.909 "num_blocks": 16384, 00:05:49.909 "uuid": "103a2bfd-dc95-40c8-b60e-bb38f09fe262", 00:05:49.909 "assigned_rate_limits": { 00:05:49.909 "rw_ios_per_sec": 0, 00:05:49.909 "rw_mbytes_per_sec": 0, 00:05:49.909 "r_mbytes_per_sec": 0, 00:05:49.909 "w_mbytes_per_sec": 0 00:05:49.909 }, 00:05:49.909 "claimed": false, 00:05:49.909 "zoned": false, 00:05:49.909 "supported_io_types": { 00:05:49.909 "read": true, 00:05:49.909 "write": true, 00:05:49.909 "unmap": true, 00:05:49.909 "flush": true, 00:05:49.909 "reset": true, 00:05:49.909 "nvme_admin": false, 00:05:49.909 "nvme_io": false, 00:05:49.909 "nvme_io_md": false, 00:05:49.909 "write_zeroes": true, 00:05:49.909 "zcopy": true, 00:05:49.909 "get_zone_info": false, 00:05:49.909 "zone_management": false, 00:05:49.909 "zone_append": false, 00:05:49.909 "compare": false, 00:05:49.909 "compare_and_write": false, 00:05:49.909 "abort": true, 00:05:49.909 "seek_hole": false, 00:05:49.909 "seek_data": false, 00:05:49.909 "copy": true, 00:05:49.909 "nvme_iov_md": false 00:05:49.909 }, 00:05:49.909 "memory_domains": [ 00:05:49.909 { 
00:05:49.909 "dma_device_id": "system", 00:05:49.909 "dma_device_type": 1 00:05:49.909 }, 00:05:49.909 { 00:05:49.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.909 "dma_device_type": 2 00:05:49.909 } 00:05:49.909 ], 00:05:49.909 "driver_specific": {} 00:05:49.909 } 00:05:49.909 ]' 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 [2024-12-06 19:04:34.824150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.909 [2024-12-06 19:04:34.824206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.909 [2024-12-06 19:04:34.824228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe08cb0 00:05:49.909 [2024-12-06 19:04:34.824240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.909 [2024-12-06 19:04:34.825439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.909 [2024-12-06 19:04:34.825462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.909 Passthru0 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.909 { 00:05:49.909 "name": "Malloc2", 00:05:49.909 "aliases": [ 00:05:49.909 "103a2bfd-dc95-40c8-b60e-bb38f09fe262" 00:05:49.909 ], 00:05:49.909 "product_name": "Malloc disk", 00:05:49.909 "block_size": 512, 00:05:49.909 "num_blocks": 16384, 00:05:49.909 "uuid": "103a2bfd-dc95-40c8-b60e-bb38f09fe262", 00:05:49.909 "assigned_rate_limits": { 00:05:49.909 "rw_ios_per_sec": 0, 00:05:49.909 "rw_mbytes_per_sec": 0, 00:05:49.909 "r_mbytes_per_sec": 0, 00:05:49.909 "w_mbytes_per_sec": 0 00:05:49.909 }, 00:05:49.909 "claimed": true, 00:05:49.909 "claim_type": "exclusive_write", 00:05:49.909 "zoned": false, 00:05:49.909 "supported_io_types": { 00:05:49.909 "read": true, 00:05:49.909 "write": true, 00:05:49.909 "unmap": true, 00:05:49.909 "flush": true, 00:05:49.909 "reset": true, 00:05:49.909 "nvme_admin": false, 00:05:49.909 "nvme_io": false, 00:05:49.909 "nvme_io_md": false, 00:05:49.909 "write_zeroes": true, 00:05:49.909 "zcopy": true, 00:05:49.909 "get_zone_info": false, 00:05:49.909 "zone_management": false, 00:05:49.909 "zone_append": false, 00:05:49.909 "compare": false, 00:05:49.909 "compare_and_write": false, 00:05:49.909 "abort": true, 00:05:49.909 "seek_hole": false, 00:05:49.909 "seek_data": false, 00:05:49.909 "copy": true, 00:05:49.909 "nvme_iov_md": false 00:05:49.909 }, 00:05:49.909 "memory_domains": [ 00:05:49.909 { 00:05:49.909 "dma_device_id": "system", 00:05:49.909 "dma_device_type": 1 00:05:49.909 }, 00:05:49.909 { 00:05:49.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.909 "dma_device_type": 2 00:05:49.909 } 00:05:49.909 ], 00:05:49.909 "driver_specific": {} 00:05:49.909 }, 00:05:49.909 { 00:05:49.909 "name": "Passthru0", 00:05:49.909 "aliases": [ 00:05:49.909 "e82ebff6-b038-5d7b-bf4b-9307b20d7760" 00:05:49.909 ], 00:05:49.909 "product_name": "passthru", 00:05:49.909 "block_size": 512, 00:05:49.909 "num_blocks": 16384, 00:05:49.909 "uuid": 
"e82ebff6-b038-5d7b-bf4b-9307b20d7760", 00:05:49.909 "assigned_rate_limits": { 00:05:49.909 "rw_ios_per_sec": 0, 00:05:49.909 "rw_mbytes_per_sec": 0, 00:05:49.909 "r_mbytes_per_sec": 0, 00:05:49.909 "w_mbytes_per_sec": 0 00:05:49.909 }, 00:05:49.909 "claimed": false, 00:05:49.909 "zoned": false, 00:05:49.909 "supported_io_types": { 00:05:49.909 "read": true, 00:05:49.909 "write": true, 00:05:49.909 "unmap": true, 00:05:49.909 "flush": true, 00:05:49.909 "reset": true, 00:05:49.909 "nvme_admin": false, 00:05:49.909 "nvme_io": false, 00:05:49.909 "nvme_io_md": false, 00:05:49.909 "write_zeroes": true, 00:05:49.909 "zcopy": true, 00:05:49.909 "get_zone_info": false, 00:05:49.909 "zone_management": false, 00:05:49.909 "zone_append": false, 00:05:49.909 "compare": false, 00:05:49.909 "compare_and_write": false, 00:05:49.909 "abort": true, 00:05:49.909 "seek_hole": false, 00:05:49.909 "seek_data": false, 00:05:49.909 "copy": true, 00:05:49.909 "nvme_iov_md": false 00:05:49.909 }, 00:05:49.909 "memory_domains": [ 00:05:49.909 { 00:05:49.909 "dma_device_id": "system", 00:05:49.909 "dma_device_type": 1 00:05:49.909 }, 00:05:49.909 { 00:05:49.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.909 "dma_device_type": 2 00:05:49.909 } 00:05:49.909 ], 00:05:49.909 "driver_specific": { 00:05:49.909 "passthru": { 00:05:49.909 "name": "Passthru0", 00:05:49.909 "base_bdev_name": "Malloc2" 00:05:49.909 } 00:05:49.909 } 00:05:49.909 } 00:05:49.909 ]' 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.909 00:05:49.909 real 0m0.209s 00:05:49.909 user 0m0.131s 00:05:49.909 sys 0m0.024s 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.909 19:04:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.909 ************************************ 00:05:49.909 END TEST rpc_daemon_integrity 00:05:49.909 ************************************ 00:05:50.176 19:04:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:50.176 19:04:34 rpc -- rpc/rpc.sh@84 -- # killprocess 86524 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 86524 ']' 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@958 -- # kill -0 86524 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.176 19:04:34 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86524 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86524' 00:05:50.176 killing process with pid 86524 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@973 -- # kill 86524 00:05:50.176 19:04:34 rpc -- common/autotest_common.sh@978 -- # wait 86524 00:05:50.435 00:05:50.435 real 0m1.920s 00:05:50.435 user 0m2.398s 00:05:50.435 sys 0m0.576s 00:05:50.435 19:04:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.435 19:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.435 ************************************ 00:05:50.435 END TEST rpc 00:05:50.435 ************************************ 00:05:50.435 19:04:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.435 19:04:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.435 19:04:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.435 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.435 ************************************ 00:05:50.435 START TEST skip_rpc 00:05:50.435 ************************************ 00:05:50.435 19:04:35 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:50.695 * Looking for test storage... 
00:05:50.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.695 19:04:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.695 --rc genhtml_branch_coverage=1 00:05:50.695 --rc genhtml_function_coverage=1 00:05:50.695 --rc genhtml_legend=1 00:05:50.695 --rc geninfo_all_blocks=1 00:05:50.695 --rc geninfo_unexecuted_blocks=1 00:05:50.695 00:05:50.695 ' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.695 --rc genhtml_branch_coverage=1 00:05:50.695 --rc genhtml_function_coverage=1 00:05:50.695 --rc genhtml_legend=1 00:05:50.695 --rc geninfo_all_blocks=1 00:05:50.695 --rc geninfo_unexecuted_blocks=1 00:05:50.695 00:05:50.695 ' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.695 --rc genhtml_branch_coverage=1 00:05:50.695 --rc genhtml_function_coverage=1 00:05:50.695 --rc genhtml_legend=1 00:05:50.695 --rc geninfo_all_blocks=1 00:05:50.695 --rc geninfo_unexecuted_blocks=1 00:05:50.695 00:05:50.695 ' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.695 --rc genhtml_branch_coverage=1 00:05:50.695 --rc genhtml_function_coverage=1 00:05:50.695 --rc genhtml_legend=1 00:05:50.695 --rc geninfo_all_blocks=1 00:05:50.695 --rc geninfo_unexecuted_blocks=1 00:05:50.695 00:05:50.695 ' 00:05:50.695 19:04:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.695 19:04:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:50.695 19:04:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.695 19:04:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.695 ************************************ 00:05:50.695 START TEST skip_rpc 00:05:50.695 ************************************ 00:05:50.695 19:04:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:50.695 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=86854 00:05:50.695 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:50.695 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.695 19:04:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:50.695 [2024-12-06 19:04:35.677865] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:05:50.695 [2024-12-06 19:04:35.677947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86854 ] 00:05:50.695 [2024-12-06 19:04:35.738771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.955 [2024-12-06 19:04:35.797617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.226 19:04:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 86854 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 86854 ']' 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 86854 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86854 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86854' 00:05:56.226 killing process with pid 86854 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 86854 00:05:56.226 19:04:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 86854 00:05:56.226 00:05:56.226 real 0m5.464s 00:05:56.226 user 0m5.161s 00:05:56.226 sys 0m0.311s 00:05:56.226 19:04:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.226 19:04:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.226 ************************************ 00:05:56.226 END TEST skip_rpc 00:05:56.226 ************************************ 00:05:56.226 19:04:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:56.226 19:04:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.226 19:04:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.226 19:04:41 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.226 ************************************ 00:05:56.226 START TEST skip_rpc_with_json 00:05:56.226 ************************************ 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=87547 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 87547 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 87547 ']' 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.226 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.226 [2024-12-06 19:04:41.195734] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:05:56.226 [2024-12-06 19:04:41.195801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87547 ] 00:05:56.226 [2024-12-06 19:04:41.264494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.486 [2024-12-06 19:04:41.320539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 [2024-12-06 19:04:41.590330] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.771 request: 00:05:56.771 { 00:05:56.771 "trtype": "tcp", 00:05:56.771 "method": "nvmf_get_transports", 00:05:56.771 "req_id": 1 00:05:56.771 } 00:05:56.771 Got JSON-RPC error response 00:05:56.771 response: 00:05:56.771 { 00:05:56.771 "code": -19, 00:05:56.771 "message": "No such device" 00:05:56.771 } 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 [2024-12-06 19:04:41.598433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.771 19:04:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.771 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.771 { 00:05:56.771 "subsystems": [ 00:05:56.771 { 00:05:56.771 "subsystem": "fsdev", 00:05:56.771 "config": [ 00:05:56.771 { 00:05:56.771 "method": "fsdev_set_opts", 00:05:56.771 "params": { 00:05:56.771 "fsdev_io_pool_size": 65535, 00:05:56.771 "fsdev_io_cache_size": 256 00:05:56.771 } 00:05:56.771 } 00:05:56.771 ] 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "vfio_user_target", 00:05:56.771 "config": null 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "keyring", 00:05:56.771 "config": [] 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "iobuf", 00:05:56.771 "config": [ 00:05:56.771 { 00:05:56.771 "method": "iobuf_set_options", 00:05:56.771 "params": { 00:05:56.771 "small_pool_count": 8192, 00:05:56.771 "large_pool_count": 1024, 00:05:56.771 "small_bufsize": 8192, 00:05:56.771 "large_bufsize": 135168, 00:05:56.771 "enable_numa": false 00:05:56.771 } 00:05:56.771 } 00:05:56.771 ] 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "sock", 00:05:56.771 "config": [ 00:05:56.771 { 00:05:56.771 "method": "sock_set_default_impl", 00:05:56.771 "params": { 00:05:56.771 "impl_name": "posix" 00:05:56.771 } 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "method": "sock_impl_set_options", 00:05:56.771 "params": { 00:05:56.771 "impl_name": "ssl", 00:05:56.771 "recv_buf_size": 4096, 00:05:56.771 "send_buf_size": 4096, 
00:05:56.771 "enable_recv_pipe": true, 00:05:56.771 "enable_quickack": false, 00:05:56.771 "enable_placement_id": 0, 00:05:56.771 "enable_zerocopy_send_server": true, 00:05:56.771 "enable_zerocopy_send_client": false, 00:05:56.771 "zerocopy_threshold": 0, 00:05:56.771 "tls_version": 0, 00:05:56.771 "enable_ktls": false 00:05:56.771 } 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "method": "sock_impl_set_options", 00:05:56.771 "params": { 00:05:56.771 "impl_name": "posix", 00:05:56.771 "recv_buf_size": 2097152, 00:05:56.771 "send_buf_size": 2097152, 00:05:56.771 "enable_recv_pipe": true, 00:05:56.771 "enable_quickack": false, 00:05:56.771 "enable_placement_id": 0, 00:05:56.771 "enable_zerocopy_send_server": true, 00:05:56.771 "enable_zerocopy_send_client": false, 00:05:56.771 "zerocopy_threshold": 0, 00:05:56.771 "tls_version": 0, 00:05:56.771 "enable_ktls": false 00:05:56.771 } 00:05:56.771 } 00:05:56.771 ] 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "vmd", 00:05:56.771 "config": [] 00:05:56.771 }, 00:05:56.771 { 00:05:56.771 "subsystem": "accel", 00:05:56.771 "config": [ 00:05:56.771 { 00:05:56.771 "method": "accel_set_options", 00:05:56.771 "params": { 00:05:56.771 "small_cache_size": 128, 00:05:56.771 "large_cache_size": 16, 00:05:56.771 "task_count": 2048, 00:05:56.771 "sequence_count": 2048, 00:05:56.771 "buf_count": 2048 00:05:56.771 } 00:05:56.771 } 00:05:56.771 ] 00:05:56.771 }, 00:05:56.772 { 00:05:56.772 "subsystem": "bdev", 00:05:56.772 "config": [ 00:05:56.772 { 00:05:56.772 "method": "bdev_set_options", 00:05:56.772 "params": { 00:05:56.772 "bdev_io_pool_size": 65535, 00:05:56.772 "bdev_io_cache_size": 256, 00:05:56.772 "bdev_auto_examine": true, 00:05:56.772 "iobuf_small_cache_size": 128, 00:05:56.772 "iobuf_large_cache_size": 16 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "bdev_raid_set_options", 00:05:56.772 "params": { 00:05:56.772 "process_window_size_kb": 1024, 00:05:56.772 "process_max_bandwidth_mb_sec": 0 
00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "bdev_iscsi_set_options", 00:05:56.772 "params": { 00:05:56.772 "timeout_sec": 30 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "bdev_nvme_set_options", 00:05:56.772 "params": { 00:05:56.772 "action_on_timeout": "none", 00:05:56.772 "timeout_us": 0, 00:05:56.772 "timeout_admin_us": 0, 00:05:56.772 "keep_alive_timeout_ms": 10000, 00:05:56.772 "arbitration_burst": 0, 00:05:56.772 "low_priority_weight": 0, 00:05:56.772 "medium_priority_weight": 0, 00:05:56.772 "high_priority_weight": 0, 00:05:56.772 "nvme_adminq_poll_period_us": 10000, 00:05:56.772 "nvme_ioq_poll_period_us": 0, 00:05:56.772 "io_queue_requests": 0, 00:05:56.772 "delay_cmd_submit": true, 00:05:56.772 "transport_retry_count": 4, 00:05:56.772 "bdev_retry_count": 3, 00:05:56.772 "transport_ack_timeout": 0, 00:05:56.772 "ctrlr_loss_timeout_sec": 0, 00:05:56.772 "reconnect_delay_sec": 0, 00:05:56.772 "fast_io_fail_timeout_sec": 0, 00:05:56.772 "disable_auto_failback": false, 00:05:56.772 "generate_uuids": false, 00:05:56.772 "transport_tos": 0, 00:05:56.772 "nvme_error_stat": false, 00:05:56.772 "rdma_srq_size": 0, 00:05:56.772 "io_path_stat": false, 00:05:56.772 "allow_accel_sequence": false, 00:05:56.772 "rdma_max_cq_size": 0, 00:05:56.772 "rdma_cm_event_timeout_ms": 0, 00:05:56.772 "dhchap_digests": [ 00:05:56.772 "sha256", 00:05:56.772 "sha384", 00:05:56.772 "sha512" 00:05:56.772 ], 00:05:56.772 "dhchap_dhgroups": [ 00:05:56.772 "null", 00:05:56.772 "ffdhe2048", 00:05:56.772 "ffdhe3072", 00:05:56.772 "ffdhe4096", 00:05:56.772 "ffdhe6144", 00:05:56.772 "ffdhe8192" 00:05:56.772 ] 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "bdev_nvme_set_hotplug", 00:05:56.772 "params": { 00:05:56.772 "period_us": 100000, 00:05:56.772 "enable": false 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "bdev_wait_for_examine" 00:05:56.772 } 00:05:56.772 ] 00:05:56.772 }, 00:05:56.772 { 
00:05:56.772 "subsystem": "scsi", 00:05:56.772 "config": null 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "scheduler", 00:05:56.772 "config": [ 00:05:56.772 { 00:05:56.772 "method": "framework_set_scheduler", 00:05:56.772 "params": { 00:05:56.772 "name": "static" 00:05:56.772 } 00:05:56.772 } 00:05:56.772 ] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "vhost_scsi", 00:05:56.772 "config": [] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "vhost_blk", 00:05:56.772 "config": [] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "ublk", 00:05:56.772 "config": [] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "nbd", 00:05:56.772 "config": [] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "nvmf", 00:05:56.772 "config": [ 00:05:56.772 { 00:05:56.772 "method": "nvmf_set_config", 00:05:56.772 "params": { 00:05:56.772 "discovery_filter": "match_any", 00:05:56.772 "admin_cmd_passthru": { 00:05:56.772 "identify_ctrlr": false 00:05:56.772 }, 00:05:56.772 "dhchap_digests": [ 00:05:56.772 "sha256", 00:05:56.772 "sha384", 00:05:56.772 "sha512" 00:05:56.772 ], 00:05:56.772 "dhchap_dhgroups": [ 00:05:56.772 "null", 00:05:56.772 "ffdhe2048", 00:05:56.772 "ffdhe3072", 00:05:56.772 "ffdhe4096", 00:05:56.772 "ffdhe6144", 00:05:56.772 "ffdhe8192" 00:05:56.772 ] 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "nvmf_set_max_subsystems", 00:05:56.772 "params": { 00:05:56.772 "max_subsystems": 1024 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "nvmf_set_crdt", 00:05:56.772 "params": { 00:05:56.772 "crdt1": 0, 00:05:56.772 "crdt2": 0, 00:05:56.772 "crdt3": 0 00:05:56.772 } 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "method": "nvmf_create_transport", 00:05:56.772 "params": { 00:05:56.772 "trtype": "TCP", 00:05:56.772 "max_queue_depth": 128, 00:05:56.772 "max_io_qpairs_per_ctrlr": 127, 00:05:56.772 "in_capsule_data_size": 4096, 00:05:56.772 "max_io_size": 131072, 00:05:56.772 
"io_unit_size": 131072, 00:05:56.772 "max_aq_depth": 128, 00:05:56.772 "num_shared_buffers": 511, 00:05:56.772 "buf_cache_size": 4294967295, 00:05:56.772 "dif_insert_or_strip": false, 00:05:56.772 "zcopy": false, 00:05:56.772 "c2h_success": true, 00:05:56.772 "sock_priority": 0, 00:05:56.772 "abort_timeout_sec": 1, 00:05:56.772 "ack_timeout": 0, 00:05:56.772 "data_wr_pool_size": 0 00:05:56.772 } 00:05:56.772 } 00:05:56.772 ] 00:05:56.772 }, 00:05:56.772 { 00:05:56.772 "subsystem": "iscsi", 00:05:56.772 "config": [ 00:05:56.772 { 00:05:56.772 "method": "iscsi_set_options", 00:05:56.772 "params": { 00:05:56.772 "node_base": "iqn.2016-06.io.spdk", 00:05:56.772 "max_sessions": 128, 00:05:56.772 "max_connections_per_session": 2, 00:05:56.772 "max_queue_depth": 64, 00:05:56.772 "default_time2wait": 2, 00:05:56.772 "default_time2retain": 20, 00:05:56.772 "first_burst_length": 8192, 00:05:56.772 "immediate_data": true, 00:05:56.772 "allow_duplicated_isid": false, 00:05:56.772 "error_recovery_level": 0, 00:05:56.772 "nop_timeout": 60, 00:05:56.772 "nop_in_interval": 30, 00:05:56.772 "disable_chap": false, 00:05:56.772 "require_chap": false, 00:05:56.772 "mutual_chap": false, 00:05:56.772 "chap_group": 0, 00:05:56.772 "max_large_datain_per_connection": 64, 00:05:56.772 "max_r2t_per_connection": 4, 00:05:56.772 "pdu_pool_size": 36864, 00:05:56.772 "immediate_data_pool_size": 16384, 00:05:56.772 "data_out_pool_size": 2048 00:05:56.772 } 00:05:56.772 } 00:05:56.772 ] 00:05:56.772 } 00:05:56.772 ] 00:05:56.772 } 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 87547 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 87547 ']' 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 87547 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87547 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87547' 00:05:56.772 killing process with pid 87547 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 87547 00:05:56.772 19:04:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 87547 00:05:57.340 19:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=87688 00:05:57.340 19:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.340 19:04:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 87688 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 87688 ']' 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 87688 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87688 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 
-- # process_name=reactor_0 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87688' 00:06:02.606 killing process with pid 87688 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 87688 00:06:02.606 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 87688 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.865 00:06:02.865 real 0m6.545s 00:06:02.865 user 0m6.164s 00:06:02.865 sys 0m0.682s 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.865 ************************************ 00:06:02.865 END TEST skip_rpc_with_json 00:06:02.865 ************************************ 00:06:02.865 19:04:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:02.865 19:04:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.865 19:04:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.865 19:04:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.865 ************************************ 00:06:02.865 START TEST skip_rpc_with_delay 00:06:02.865 ************************************ 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:02.865 19:04:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:02.866 [2024-12-06 19:04:47.792639] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.866 00:06:02.866 real 0m0.075s 00:06:02.866 user 0m0.041s 00:06:02.866 sys 0m0.034s 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.866 19:04:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.866 ************************************ 00:06:02.866 END TEST skip_rpc_with_delay 00:06:02.866 ************************************ 00:06:02.866 19:04:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.866 19:04:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.866 19:04:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.866 19:04:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.866 19:04:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.866 19:04:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.866 ************************************ 00:06:02.866 START TEST exit_on_failed_rpc_init 00:06:02.866 ************************************ 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=88396 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 88396 00:06:02.866 
19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 88396 ']' 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.866 19:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.126 [2024-12-06 19:04:47.919114] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:03.126 [2024-12-06 19:04:47.919212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88396 ] 00:06:03.126 [2024-12-06 19:04:47.985376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.126 [2024-12-06 19:04:48.044847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.384 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.384 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:03.384 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.385 19:04:48 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.385 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.385 [2024-12-06 19:04:48.359675] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:06:03.385 [2024-12-06 19:04:48.359797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88532 ] 00:06:03.385 [2024-12-06 19:04:48.424991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.667 [2024-12-06 19:04:48.484885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.667 [2024-12-06 19:04:48.485028] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:03.667 [2024-12-06 19:04:48.485062] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.667 [2024-12-06 19:04:48.485073] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 88396 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 88396 ']' 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 88396 00:06:03.667 19:04:48 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88396 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88396' 00:06:03.667 killing process with pid 88396 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 88396 00:06:03.667 19:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 88396 00:06:04.235 00:06:04.235 real 0m1.159s 00:06:04.235 user 0m1.280s 00:06:04.235 sys 0m0.424s 00:06:04.235 19:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.235 19:04:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 ************************************ 00:06:04.235 END TEST exit_on_failed_rpc_init 00:06:04.235 ************************************ 00:06:04.235 19:04:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:04.235 00:06:04.235 real 0m13.595s 00:06:04.235 user 0m12.822s 00:06:04.235 sys 0m1.645s 00:06:04.235 19:04:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.235 19:04:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 ************************************ 00:06:04.235 END TEST skip_rpc 00:06:04.235 ************************************ 00:06:04.235 19:04:49 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.235 19:04:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.235 19:04:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.235 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 ************************************ 00:06:04.235 START TEST rpc_client 00:06:04.235 ************************************ 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:04.235 * Looking for test storage... 00:06:04.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.235 19:04:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.235 --rc genhtml_branch_coverage=1 00:06:04.235 --rc genhtml_function_coverage=1 00:06:04.235 --rc genhtml_legend=1 00:06:04.235 --rc geninfo_all_blocks=1 00:06:04.235 --rc geninfo_unexecuted_blocks=1 00:06:04.235 00:06:04.235 ' 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.235 --rc genhtml_branch_coverage=1 
00:06:04.235 --rc genhtml_function_coverage=1 00:06:04.235 --rc genhtml_legend=1 00:06:04.235 --rc geninfo_all_blocks=1 00:06:04.235 --rc geninfo_unexecuted_blocks=1 00:06:04.235 00:06:04.235 ' 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.235 --rc genhtml_branch_coverage=1 00:06:04.235 --rc genhtml_function_coverage=1 00:06:04.235 --rc genhtml_legend=1 00:06:04.235 --rc geninfo_all_blocks=1 00:06:04.235 --rc geninfo_unexecuted_blocks=1 00:06:04.235 00:06:04.235 ' 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.235 --rc genhtml_branch_coverage=1 00:06:04.235 --rc genhtml_function_coverage=1 00:06:04.235 --rc genhtml_legend=1 00:06:04.235 --rc geninfo_all_blocks=1 00:06:04.235 --rc geninfo_unexecuted_blocks=1 00:06:04.235 00:06:04.235 ' 00:06:04.235 19:04:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:04.235 OK 00:06:04.235 19:04:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:04.235 00:06:04.235 real 0m0.152s 00:06:04.235 user 0m0.102s 00:06:04.235 sys 0m0.058s 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.235 19:04:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 ************************************ 00:06:04.235 END TEST rpc_client 00:06:04.235 ************************************ 00:06:04.235 19:04:49 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.235 19:04:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.235 19:04:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.235 19:04:49 -- common/autotest_common.sh@10 
-- # set +x 00:06:04.495 ************************************ 00:06:04.495 START TEST json_config 00:06:04.495 ************************************ 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.495 19:04:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.495 19:04:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.495 19:04:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.495 19:04:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.495 19:04:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.495 19:04:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:04.495 19:04:49 json_config -- scripts/common.sh@345 -- # : 1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.495 19:04:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.495 19:04:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@353 -- # local d=1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.495 19:04:49 json_config -- scripts/common.sh@355 -- # echo 1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.495 19:04:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@353 -- # local d=2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.495 19:04:49 json_config -- scripts/common.sh@355 -- # echo 2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.495 19:04:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.495 19:04:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.495 19:04:49 json_config -- scripts/common.sh@368 -- # return 0 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.495 --rc genhtml_branch_coverage=1 00:06:04.495 --rc genhtml_function_coverage=1 00:06:04.495 --rc genhtml_legend=1 00:06:04.495 --rc geninfo_all_blocks=1 00:06:04.495 --rc geninfo_unexecuted_blocks=1 00:06:04.495 00:06:04.495 ' 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.495 --rc genhtml_branch_coverage=1 00:06:04.495 --rc genhtml_function_coverage=1 00:06:04.495 --rc genhtml_legend=1 00:06:04.495 --rc geninfo_all_blocks=1 00:06:04.495 --rc geninfo_unexecuted_blocks=1 00:06:04.495 00:06:04.495 ' 00:06:04.495 19:04:49 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.495 --rc genhtml_branch_coverage=1 00:06:04.495 --rc genhtml_function_coverage=1 00:06:04.495 --rc genhtml_legend=1 00:06:04.495 --rc geninfo_all_blocks=1 00:06:04.495 --rc geninfo_unexecuted_blocks=1 00:06:04.495 00:06:04.495 ' 00:06:04.495 19:04:49 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.495 --rc genhtml_branch_coverage=1 00:06:04.495 --rc genhtml_function_coverage=1 00:06:04.495 --rc genhtml_legend=1 00:06:04.495 --rc geninfo_all_blocks=1 00:06:04.495 --rc geninfo_unexecuted_blocks=1 00:06:04.495 00:06:04.495 ' 00:06:04.495 19:04:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.495 19:04:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.495 19:04:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.495 19:04:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.495 19:04:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.495 19:04:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.495 19:04:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.495 19:04:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.495 19:04:49 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.495 19:04:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.495 19:04:49 json_config -- nvmf/common.sh@51 -- # : 0 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.496 19:04:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:04.496 INFO: JSON configuration test init 00:06:04.496 19:04:49 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.496 19:04:49 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:04.496 19:04:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.496 19:04:49 json_config -- json_config/common.sh@10 -- # shift 00:06:04.496 19:04:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.496 19:04:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.496 19:04:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.496 19:04:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.496 19:04:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.496 19:04:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=88790 00:06:04.496 19:04:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:04.496 19:04:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.496 Waiting for target to run... 
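After launching spdk_tgt, the harness prints "Waiting for target to run..." and polls (via `waitforlisten`, with `max_retries=100`) until the target's RPC socket is usable. A minimal stand-in sketch of that bounded-retry poll, using a plain flag file instead of the real `/var/tmp/spdk_tgt.sock` UNIX socket (the flag path is hypothetical, not from the log):

```shell
#!/bin/sh
# Bounded-retry wait modeled on waitforlisten: poll for an artifact
# (a plain file standing in for the RPC socket) up to max_retries
# times before giving up.
flag="${TMPDIR:-/tmp}/ready.$$"
rm -f "$flag"

# Background "target" that becomes ready after a short delay.
( sleep 1; : > "$flag" ) &

max_retries=100
i=0
ready=0
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$flag" ]; then
        ready=1
        echo "target is ready"
        break
    fi
    i=$((i + 1))
    sleep 0.1
done
[ "$ready" -eq 1 ] || echo "timed out waiting for target" >&2
wait
rm -f "$flag"
```

The real helper additionally checks that the PID it is waiting on is still alive, so a crashed target fails fast instead of burning all retries.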
00:06:04.496 19:04:49 json_config -- json_config/common.sh@25 -- # waitforlisten 88790 /var/tmp/spdk_tgt.sock 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 88790 ']' 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.496 19:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.496 [2024-12-06 19:04:49.501576] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:04.496 [2024-12-06 19:04:49.501677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88790 ] 00:06:05.064 [2024-12-06 19:04:49.834217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.064 [2024-12-06 19:04:49.876512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:05.630 19:04:50 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.630 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@726 -- 
# xtrace_disable 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.630 19:04:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:05.630 19:04:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:05.630 19:04:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:08.916 19:04:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.916 19:04:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:08.916 19:04:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.916 19:04:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@54 -- # sort 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:09.175 19:04:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.175 19:04:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.175 19:04:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 
00:06:09.175 19:04:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.175 19:04:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:09.175 19:04:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:09.175 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:09.433 MallocForNvmf0 00:06:09.433 19:04:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.433 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:09.692 MallocForNvmf1 00:06:09.692 19:04:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.692 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.951 [2024-12-06 19:04:54.774671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.951 19:04:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.951 19:04:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
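The `create_nvmf_subsystem_config` phase above issues a fixed sequence of `rpc.py` calls against the target socket. Since no spdk_tgt is running outside the CI box, this is a dry-run sketch that echoes each call instead of executing it (the `tgt_rpc` echo wrapper is a stand-in for the real one in json_config/common.sh; socket path, sizes, and NQN are copied from the log):

```shell
#!/bin/sh
# Dry-run mirror of the logged RPC sequence: echo each rpc.py
# invocation instead of executing it, so the order can be inspected.
tgt_rpc() {
    echo "rpc.py -s /var/tmp/spdk_tgt.sock $*"
}

# Backing malloc bdevs, TCP transport, subsystem, namespaces,
# listener -- the same order the log records.
tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

The ordering matters: the bdevs and transport must exist before the subsystem can attach namespaces, and the listener comes last, which is why the log shows the "TCP Target Listening on 127.0.0.1 port 4420" notice only after the `nvmf_subsystem_add_listener` call.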
00:06:10.214 19:04:55 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:10.214 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:10.476 19:04:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:10.476 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:10.734 19:04:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.734 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.993 [2024-12-06 19:04:55.830100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.993 19:04:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:10.993 19:04:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.993 19:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.993 19:04:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:10.993 19:04:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.993 19:04:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.993 19:04:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:10.993 19:04:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc 
bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.993 19:04:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:11.251 MallocBdevForConfigChangeCheck 00:06:11.251 19:04:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:11.251 19:04:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:11.251 19:04:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.251 19:04:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:11.251 19:04:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.817 19:04:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:11.817 INFO: shutting down applications... 
00:06:11.817 19:04:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:11.817 19:04:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:11.817 19:04:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:11.817 19:04:56 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:13.717 Calling clear_iscsi_subsystem 00:06:13.717 Calling clear_nvmf_subsystem 00:06:13.717 Calling clear_nbd_subsystem 00:06:13.717 Calling clear_ublk_subsystem 00:06:13.717 Calling clear_vhost_blk_subsystem 00:06:13.717 Calling clear_vhost_scsi_subsystem 00:06:13.717 Calling clear_bdev_subsystem 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@352 -- # break 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:13.717 19:04:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:13.717 19:04:58 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:13.718 19:04:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.718 19:04:58 json_config -- json_config/common.sh@35 -- # [[ -n 88790 ]] 00:06:13.718 19:04:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 88790 00:06:13.718 19:04:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.718 19:04:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.718 19:04:58 json_config -- json_config/common.sh@41 -- # kill -0 88790 00:06:13.718 19:04:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.289 19:04:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.289 19:04:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.289 19:04:59 json_config -- json_config/common.sh@41 -- # kill -0 88790 00:06:14.289 19:04:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:14.289 19:04:59 json_config -- json_config/common.sh@43 -- # break 00:06:14.289 19:04:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:14.289 19:04:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:14.289 SPDK target shutdown done 00:06:14.289 19:04:59 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:14.289 INFO: relaunching applications... 
00:06:14.289 19:04:59 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.289 19:04:59 json_config -- json_config/common.sh@9 -- # local app=target 00:06:14.289 19:04:59 json_config -- json_config/common.sh@10 -- # shift 00:06:14.289 19:04:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.289 19:04:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.289 19:04:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.289 19:04:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.289 19:04:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.289 19:04:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=89993 00:06:14.289 19:04:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.289 19:04:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.289 Waiting for target to run... 00:06:14.289 19:04:59 json_config -- json_config/common.sh@25 -- # waitforlisten 89993 /var/tmp/spdk_tgt.sock 00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 89993 ']' 00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.289 19:04:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.289 [2024-12-06 19:04:59.261245] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:14.289 [2024-12-06 19:04:59.261345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89993 ] 00:06:14.859 [2024-12-06 19:04:59.789579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.859 [2024-12-06 19:04:59.842937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.157 [2024-12-06 19:05:02.901543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.157 [2024-12-06 19:05:02.934078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:18.157 19:05:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.157 19:05:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:18.157 19:05:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:18.157 00:06:18.157 19:05:02 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:18.157 19:05:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:18.157 INFO: Checking if target configuration is the same... 
00:06:18.157 19:05:02 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.157 19:05:02 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:18.157 19:05:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.157 + '[' 2 -ne 2 ']' 00:06:18.157 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.157 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:18.157 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.157 +++ basename /dev/fd/62 00:06:18.157 ++ mktemp /tmp/62.XXX 00:06:18.157 + tmp_file_1=/tmp/62.Wof 00:06:18.157 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.157 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.157 + tmp_file_2=/tmp/spdk_tgt_config.json.5pu 00:06:18.157 + ret=0 00:06:18.157 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.415 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:18.415 + diff -u /tmp/62.Wof /tmp/spdk_tgt_config.json.5pu 00:06:18.415 + echo 'INFO: JSON config files are the same' 00:06:18.415 INFO: JSON config files are the same 00:06:18.415 + rm /tmp/62.Wof /tmp/spdk_tgt_config.json.5pu 00:06:18.416 + exit 0 00:06:18.416 19:05:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:18.416 19:05:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:18.416 INFO: changing configuration and checking if this can be detected... 
00:06:18.416 19:05:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.416 19:05:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:18.674 19:05:03 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.674 19:05:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:18.674 19:05:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.674 + '[' 2 -ne 2 ']' 00:06:18.674 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:18.674 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:18.674 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.674 +++ basename /dev/fd/62 00:06:18.674 ++ mktemp /tmp/62.XXX 00:06:18.674 + tmp_file_1=/tmp/62.eip 00:06:18.674 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:18.674 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:18.674 + tmp_file_2=/tmp/spdk_tgt_config.json.hIQ 00:06:18.674 + ret=0 00:06:18.674 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:19.243 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:19.243 + diff -u /tmp/62.eip /tmp/spdk_tgt_config.json.hIQ 00:06:19.243 + ret=1 00:06:19.243 + echo '=== Start of file: /tmp/62.eip ===' 00:06:19.243 + cat /tmp/62.eip 00:06:19.243 + echo '=== End of file: /tmp/62.eip ===' 00:06:19.243 + echo '' 00:06:19.243 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hIQ ===' 00:06:19.243 + cat /tmp/spdk_tgt_config.json.hIQ 00:06:19.243 + echo '=== End of file: /tmp/spdk_tgt_config.json.hIQ ===' 00:06:19.243 + echo '' 00:06:19.243 + rm /tmp/62.eip /tmp/spdk_tgt_config.json.hIQ 00:06:19.243 + exit 1 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:19.243 INFO: configuration change detected. 
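The config-change check traced above boils down to: save the running target's config, normalize both JSON files into a canonical form, then `diff` them (exit 0 = same, exit 1 = change detected). A hedged stand-alone sketch of that idea — the file contents and the use of `python3 -m json.tool` here are illustrative, not SPDK's actual `config_filter.py`:

```shell
#!/usr/bin/env bash
# Sketch: compare two JSON configs by sorting keys before diffing, so that
# key-order differences do not count as a configuration change.
before=$(mktemp)
after=$(mktemp)
printf '{"subsystems": [], "b": 1, "a": 2}\n' > "$before"   # stand-in configs
printf '{"a": 2, "b": 1, "subsystems": []}\n' > "$after"
if diff -u <(python3 -m json.tool --sort-keys "$before") \
           <(python3 -m json.tool --sort-keys "$after") >/dev/null; then
  echo "INFO: JSON config files are the same"
else
  echo "INFO: configuration change detected."
fi
rm -f "$before" "$after"
```

Deleting a bdev (as the trace does with `bdev_malloc_delete MallocBdevForConfigChangeCheck`) changes the saved config, so the second comparison reports a difference and exits 1.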
00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 89993 ]] 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.243 19:05:04 json_config -- json_config/json_config.sh@330 -- # killprocess 89993 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@954 -- # '[' -z 89993 ']' 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@958 -- # kill -0 89993 
00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@959 -- # uname 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89993 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89993' 00:06:19.243 killing process with pid 89993 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@973 -- # kill 89993 00:06:19.243 19:05:04 json_config -- common/autotest_common.sh@978 -- # wait 89993 00:06:21.140 19:05:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.140 19:05:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:21.140 19:05:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.140 19:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.140 19:05:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:21.140 19:05:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:21.140 INFO: Success 00:06:21.140 00:06:21.140 real 0m16.574s 00:06:21.140 user 0m18.163s 00:06:21.140 sys 0m2.647s 00:06:21.140 19:05:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.140 19:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.140 ************************************ 00:06:21.140 END TEST json_config 00:06:21.140 ************************************ 00:06:21.140 19:05:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.140 19:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.140 19:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.140 19:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:21.140 ************************************ 00:06:21.140 START TEST json_config_extra_key 00:06:21.140 ************************************ 00:06:21.140 19:05:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:21.140 19:05:05 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.140 19:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.140 19:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.140 19:05:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.140 --rc genhtml_branch_coverage=1 00:06:21.140 --rc genhtml_function_coverage=1 00:06:21.140 --rc genhtml_legend=1 00:06:21.140 --rc geninfo_all_blocks=1 
00:06:21.140 --rc geninfo_unexecuted_blocks=1 00:06:21.140 00:06:21.140 ' 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.140 --rc genhtml_branch_coverage=1 00:06:21.140 --rc genhtml_function_coverage=1 00:06:21.140 --rc genhtml_legend=1 00:06:21.140 --rc geninfo_all_blocks=1 00:06:21.140 --rc geninfo_unexecuted_blocks=1 00:06:21.140 00:06:21.140 ' 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.140 --rc genhtml_branch_coverage=1 00:06:21.140 --rc genhtml_function_coverage=1 00:06:21.140 --rc genhtml_legend=1 00:06:21.140 --rc geninfo_all_blocks=1 00:06:21.140 --rc geninfo_unexecuted_blocks=1 00:06:21.140 00:06:21.140 ' 00:06:21.140 19:05:06 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.140 --rc genhtml_branch_coverage=1 00:06:21.140 --rc genhtml_function_coverage=1 00:06:21.140 --rc genhtml_legend=1 00:06:21.140 --rc geninfo_all_blocks=1 00:06:21.140 --rc geninfo_unexecuted_blocks=1 00:06:21.140 00:06:21.140 ' 00:06:21.140 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.140 19:05:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.141 19:05:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.141 19:05:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.141 19:05:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.141 19:05:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.141 19:05:06 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.141 19:05:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.141 19:05:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.141 19:05:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:21.141 19:05:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:21.141 19:05:06 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:21.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.141 19:05:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:21.141 INFO: launching applications... 00:06:21.141 19:05:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=90910 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.141 Waiting for target to run... 
00:06:21.141 19:05:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 90910 /var/tmp/spdk_tgt.sock 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 90910 ']' 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.141 19:05:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.141 [2024-12-06 19:05:06.132938] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:21.141 [2024-12-06 19:05:06.133056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90910 ] 00:06:21.708 [2024-12-06 19:05:06.485632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.708 [2024-12-06 19:05:06.527740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.275 19:05:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.275 19:05:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:22.275 00:06:22.275 19:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:22.275 INFO: shutting down applications... 00:06:22.275 19:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 90910 ]] 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 90910 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 90910 00:06:22.275 19:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.845 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.845 19:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.845 19:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 90910 00:06:22.845 19:05:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.845 19:05:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:22.846 19:05:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.846 19:05:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.846 SPDK target shutdown done 00:06:22.846 19:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:22.846 Success 00:06:22.846 00:06:22.846 real 0m1.717s 00:06:22.846 user 0m1.727s 00:06:22.846 sys 0m0.462s 00:06:22.846 19:05:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.846 19:05:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
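The shutdown sequence traced in `json_config/common.sh` follows a common pattern: send SIGINT for a graceful stop, then poll with `kill -0` (up to 30 half-second tries) before declaring shutdown done. A hedged sketch, using a background `sleep` as a stand-in for `spdk_tgt`:

```shell
#!/usr/bin/env bash
# Sketch: graceful shutdown with a bounded liveness poll.
sleep 300 &              # stand-in for the spdk_tgt process
pid=$!
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
  if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 only tests existence
    echo "SPDK target shutdown done"
    break
  fi
  sleep 0.5
done
```

`kill -0` sends no signal; it merely checks whether the PID is still alive, which is why the loop in the trace pairs it with `sleep 0.5` between retries.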
00:06:22.846 ************************************ 00:06:22.846 END TEST json_config_extra_key 00:06:22.846 ************************************ 00:06:22.846 19:05:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.846 19:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.846 19:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.846 19:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:22.846 ************************************ 00:06:22.846 START TEST alias_rpc 00:06:22.846 ************************************ 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.846 * Looking for test storage... 00:06:22.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.846 19:05:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.846 --rc genhtml_branch_coverage=1 00:06:22.846 --rc genhtml_function_coverage=1 00:06:22.846 --rc genhtml_legend=1 00:06:22.846 --rc geninfo_all_blocks=1 00:06:22.846 --rc geninfo_unexecuted_blocks=1 00:06:22.846 00:06:22.846 ' 
00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.846 --rc genhtml_branch_coverage=1 00:06:22.846 --rc genhtml_function_coverage=1 00:06:22.846 --rc genhtml_legend=1 00:06:22.846 --rc geninfo_all_blocks=1 00:06:22.846 --rc geninfo_unexecuted_blocks=1 00:06:22.846 00:06:22.846 ' 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.846 --rc genhtml_branch_coverage=1 00:06:22.846 --rc genhtml_function_coverage=1 00:06:22.846 --rc genhtml_legend=1 00:06:22.846 --rc geninfo_all_blocks=1 00:06:22.846 --rc geninfo_unexecuted_blocks=1 00:06:22.846 00:06:22.846 ' 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.846 --rc genhtml_branch_coverage=1 00:06:22.846 --rc genhtml_function_coverage=1 00:06:22.846 --rc genhtml_legend=1 00:06:22.846 --rc geninfo_all_blocks=1 00:06:22.846 --rc geninfo_unexecuted_blocks=1 00:06:22.846 00:06:22.846 ' 00:06:22.846 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.846 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=91231 00:06:22.846 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.846 19:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 91231 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 91231 ']' 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.846 19:05:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.107 [2024-12-06 19:05:07.897095] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:23.107 [2024-12-06 19:05:07.897180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91231 ] 00:06:23.107 [2024-12-06 19:05:07.964852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.107 [2024-12-06 19:05:08.020146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.366 19:05:08 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.366 19:05:08 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.366 19:05:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:23.624 19:05:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 91231 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 91231 ']' 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 91231 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91231 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.624 19:05:08 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 91231' 00:06:23.624 killing process with pid 91231 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 91231 00:06:23.624 19:05:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 91231 00:06:24.189 00:06:24.189 real 0m1.338s 00:06:24.189 user 0m1.445s 00:06:24.189 sys 0m0.442s 00:06:24.189 19:05:09 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.189 19:05:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.189 ************************************ 00:06:24.189 END TEST alias_rpc 00:06:24.189 ************************************ 00:06:24.189 19:05:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:24.189 19:05:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:24.189 19:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.189 19:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.189 19:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.189 ************************************ 00:06:24.189 START TEST spdkcli_tcp 00:06:24.189 ************************************ 00:06:24.189 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:24.189 * Looking for test storage... 
00:06:24.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.190 19:05:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.190 --rc genhtml_branch_coverage=1 00:06:24.190 --rc genhtml_function_coverage=1 00:06:24.190 --rc genhtml_legend=1 00:06:24.190 --rc geninfo_all_blocks=1 00:06:24.190 --rc geninfo_unexecuted_blocks=1 00:06:24.190 00:06:24.190 ' 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.190 --rc genhtml_branch_coverage=1 00:06:24.190 --rc genhtml_function_coverage=1 00:06:24.190 --rc genhtml_legend=1 00:06:24.190 --rc geninfo_all_blocks=1 00:06:24.190 --rc geninfo_unexecuted_blocks=1 00:06:24.190 00:06:24.190 ' 00:06:24.190 19:05:09 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.190 --rc genhtml_branch_coverage=1 00:06:24.190 --rc genhtml_function_coverage=1 00:06:24.190 --rc genhtml_legend=1 00:06:24.190 --rc geninfo_all_blocks=1 00:06:24.190 --rc geninfo_unexecuted_blocks=1 00:06:24.190 00:06:24.190 ' 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.190 --rc genhtml_branch_coverage=1 00:06:24.190 --rc genhtml_function_coverage=1 00:06:24.190 --rc genhtml_legend=1 00:06:24.190 --rc geninfo_all_blocks=1 00:06:24.190 --rc geninfo_unexecuted_blocks=1 00:06:24.190 00:06:24.190 ' 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=91432 00:06:24.190 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:24.190 19:05:09 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 91432 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 91432 ']' 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.190 19:05:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.448 [2024-12-06 19:05:09.285209] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:24.448 [2024-12-06 19:05:09.285285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91432 ] 00:06:24.448 [2024-12-06 19:05:09.349383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.448 [2024-12-06 19:05:09.407549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.448 [2024-12-06 19:05:09.407551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.707 19:05:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.707 19:05:09 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:24.707 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=91503 00:06:24.707 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.707 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:24.965 [ 00:06:24.965 "bdev_malloc_delete", 00:06:24.965 "bdev_malloc_create", 00:06:24.965 "bdev_null_resize", 00:06:24.965 "bdev_null_delete", 00:06:24.965 "bdev_null_create", 00:06:24.965 "bdev_nvme_cuse_unregister", 00:06:24.965 "bdev_nvme_cuse_register", 00:06:24.965 "bdev_opal_new_user", 00:06:24.965 "bdev_opal_set_lock_state", 00:06:24.965 "bdev_opal_delete", 00:06:24.965 "bdev_opal_get_info", 00:06:24.965 "bdev_opal_create", 00:06:24.965 "bdev_nvme_opal_revert", 00:06:24.965 "bdev_nvme_opal_init", 00:06:24.965 "bdev_nvme_send_cmd", 00:06:24.965 "bdev_nvme_set_keys", 00:06:24.965 "bdev_nvme_get_path_iostat", 00:06:24.965 "bdev_nvme_get_mdns_discovery_info", 00:06:24.965 "bdev_nvme_stop_mdns_discovery", 00:06:24.965 "bdev_nvme_start_mdns_discovery", 00:06:24.965 "bdev_nvme_set_multipath_policy", 00:06:24.965 "bdev_nvme_set_preferred_path", 00:06:24.965 "bdev_nvme_get_io_paths", 00:06:24.965 "bdev_nvme_remove_error_injection", 00:06:24.965 "bdev_nvme_add_error_injection", 00:06:24.965 "bdev_nvme_get_discovery_info", 00:06:24.965 "bdev_nvme_stop_discovery", 00:06:24.965 "bdev_nvme_start_discovery", 00:06:24.965 "bdev_nvme_get_controller_health_info", 00:06:24.965 "bdev_nvme_disable_controller", 00:06:24.965 "bdev_nvme_enable_controller", 00:06:24.965 "bdev_nvme_reset_controller", 00:06:24.965 "bdev_nvme_get_transport_statistics", 00:06:24.965 "bdev_nvme_apply_firmware", 00:06:24.965 "bdev_nvme_detach_controller", 00:06:24.965 "bdev_nvme_get_controllers", 00:06:24.965 "bdev_nvme_attach_controller", 00:06:24.966 "bdev_nvme_set_hotplug", 00:06:24.966 "bdev_nvme_set_options", 00:06:24.966 "bdev_passthru_delete", 00:06:24.966 "bdev_passthru_create", 00:06:24.966 "bdev_lvol_set_parent_bdev", 00:06:24.966 "bdev_lvol_set_parent", 00:06:24.966 "bdev_lvol_check_shallow_copy", 00:06:24.966 "bdev_lvol_start_shallow_copy", 00:06:24.966 "bdev_lvol_grow_lvstore", 00:06:24.966 "bdev_lvol_get_lvols", 00:06:24.966 "bdev_lvol_get_lvstores", 
00:06:24.966 "bdev_lvol_delete", 00:06:24.966 "bdev_lvol_set_read_only", 00:06:24.966 "bdev_lvol_resize", 00:06:24.966 "bdev_lvol_decouple_parent", 00:06:24.966 "bdev_lvol_inflate", 00:06:24.966 "bdev_lvol_rename", 00:06:24.966 "bdev_lvol_clone_bdev", 00:06:24.966 "bdev_lvol_clone", 00:06:24.966 "bdev_lvol_snapshot", 00:06:24.966 "bdev_lvol_create", 00:06:24.966 "bdev_lvol_delete_lvstore", 00:06:24.966 "bdev_lvol_rename_lvstore", 00:06:24.966 "bdev_lvol_create_lvstore", 00:06:24.966 "bdev_raid_set_options", 00:06:24.966 "bdev_raid_remove_base_bdev", 00:06:24.966 "bdev_raid_add_base_bdev", 00:06:24.966 "bdev_raid_delete", 00:06:24.966 "bdev_raid_create", 00:06:24.966 "bdev_raid_get_bdevs", 00:06:24.966 "bdev_error_inject_error", 00:06:24.966 "bdev_error_delete", 00:06:24.966 "bdev_error_create", 00:06:24.966 "bdev_split_delete", 00:06:24.966 "bdev_split_create", 00:06:24.966 "bdev_delay_delete", 00:06:24.966 "bdev_delay_create", 00:06:24.966 "bdev_delay_update_latency", 00:06:24.966 "bdev_zone_block_delete", 00:06:24.966 "bdev_zone_block_create", 00:06:24.966 "blobfs_create", 00:06:24.966 "blobfs_detect", 00:06:24.966 "blobfs_set_cache_size", 00:06:24.966 "bdev_aio_delete", 00:06:24.966 "bdev_aio_rescan", 00:06:24.966 "bdev_aio_create", 00:06:24.966 "bdev_ftl_set_property", 00:06:24.966 "bdev_ftl_get_properties", 00:06:24.966 "bdev_ftl_get_stats", 00:06:24.966 "bdev_ftl_unmap", 00:06:24.966 "bdev_ftl_unload", 00:06:24.966 "bdev_ftl_delete", 00:06:24.966 "bdev_ftl_load", 00:06:24.966 "bdev_ftl_create", 00:06:24.966 "bdev_virtio_attach_controller", 00:06:24.966 "bdev_virtio_scsi_get_devices", 00:06:24.966 "bdev_virtio_detach_controller", 00:06:24.966 "bdev_virtio_blk_set_hotplug", 00:06:24.966 "bdev_iscsi_delete", 00:06:24.966 "bdev_iscsi_create", 00:06:24.966 "bdev_iscsi_set_options", 00:06:24.966 "accel_error_inject_error", 00:06:24.966 "ioat_scan_accel_module", 00:06:24.966 "dsa_scan_accel_module", 00:06:24.966 "iaa_scan_accel_module", 00:06:24.966 
"vfu_virtio_create_fs_endpoint", 00:06:24.966 "vfu_virtio_create_scsi_endpoint", 00:06:24.966 "vfu_virtio_scsi_remove_target", 00:06:24.966 "vfu_virtio_scsi_add_target", 00:06:24.966 "vfu_virtio_create_blk_endpoint", 00:06:24.966 "vfu_virtio_delete_endpoint", 00:06:24.966 "keyring_file_remove_key", 00:06:24.966 "keyring_file_add_key", 00:06:24.966 "keyring_linux_set_options", 00:06:24.966 "fsdev_aio_delete", 00:06:24.966 "fsdev_aio_create", 00:06:24.966 "iscsi_get_histogram", 00:06:24.966 "iscsi_enable_histogram", 00:06:24.966 "iscsi_set_options", 00:06:24.966 "iscsi_get_auth_groups", 00:06:24.966 "iscsi_auth_group_remove_secret", 00:06:24.966 "iscsi_auth_group_add_secret", 00:06:24.966 "iscsi_delete_auth_group", 00:06:24.966 "iscsi_create_auth_group", 00:06:24.966 "iscsi_set_discovery_auth", 00:06:24.966 "iscsi_get_options", 00:06:24.966 "iscsi_target_node_request_logout", 00:06:24.966 "iscsi_target_node_set_redirect", 00:06:24.966 "iscsi_target_node_set_auth", 00:06:24.966 "iscsi_target_node_add_lun", 00:06:24.966 "iscsi_get_stats", 00:06:24.966 "iscsi_get_connections", 00:06:24.966 "iscsi_portal_group_set_auth", 00:06:24.966 "iscsi_start_portal_group", 00:06:24.966 "iscsi_delete_portal_group", 00:06:24.966 "iscsi_create_portal_group", 00:06:24.966 "iscsi_get_portal_groups", 00:06:24.966 "iscsi_delete_target_node", 00:06:24.966 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.966 "iscsi_target_node_add_pg_ig_maps", 00:06:24.966 "iscsi_create_target_node", 00:06:24.966 "iscsi_get_target_nodes", 00:06:24.966 "iscsi_delete_initiator_group", 00:06:24.966 "iscsi_initiator_group_remove_initiators", 00:06:24.966 "iscsi_initiator_group_add_initiators", 00:06:24.966 "iscsi_create_initiator_group", 00:06:24.966 "iscsi_get_initiator_groups", 00:06:24.966 "nvmf_set_crdt", 00:06:24.966 "nvmf_set_config", 00:06:24.966 "nvmf_set_max_subsystems", 00:06:24.966 "nvmf_stop_mdns_prr", 00:06:24.966 "nvmf_publish_mdns_prr", 00:06:24.966 "nvmf_subsystem_get_listeners", 00:06:24.966 
"nvmf_subsystem_get_qpairs", 00:06:24.966 "nvmf_subsystem_get_controllers", 00:06:24.966 "nvmf_get_stats", 00:06:24.966 "nvmf_get_transports", 00:06:24.966 "nvmf_create_transport", 00:06:24.966 "nvmf_get_targets", 00:06:24.966 "nvmf_delete_target", 00:06:24.966 "nvmf_create_target", 00:06:24.966 "nvmf_subsystem_allow_any_host", 00:06:24.966 "nvmf_subsystem_set_keys", 00:06:24.966 "nvmf_subsystem_remove_host", 00:06:24.966 "nvmf_subsystem_add_host", 00:06:24.966 "nvmf_ns_remove_host", 00:06:24.966 "nvmf_ns_add_host", 00:06:24.966 "nvmf_subsystem_remove_ns", 00:06:24.966 "nvmf_subsystem_set_ns_ana_group", 00:06:24.966 "nvmf_subsystem_add_ns", 00:06:24.966 "nvmf_subsystem_listener_set_ana_state", 00:06:24.966 "nvmf_discovery_get_referrals", 00:06:24.966 "nvmf_discovery_remove_referral", 00:06:24.966 "nvmf_discovery_add_referral", 00:06:24.966 "nvmf_subsystem_remove_listener", 00:06:24.966 "nvmf_subsystem_add_listener", 00:06:24.966 "nvmf_delete_subsystem", 00:06:24.966 "nvmf_create_subsystem", 00:06:24.966 "nvmf_get_subsystems", 00:06:24.966 "env_dpdk_get_mem_stats", 00:06:24.966 "nbd_get_disks", 00:06:24.966 "nbd_stop_disk", 00:06:24.966 "nbd_start_disk", 00:06:24.966 "ublk_recover_disk", 00:06:24.966 "ublk_get_disks", 00:06:24.966 "ublk_stop_disk", 00:06:24.966 "ublk_start_disk", 00:06:24.966 "ublk_destroy_target", 00:06:24.966 "ublk_create_target", 00:06:24.966 "virtio_blk_create_transport", 00:06:24.966 "virtio_blk_get_transports", 00:06:24.966 "vhost_controller_set_coalescing", 00:06:24.966 "vhost_get_controllers", 00:06:24.966 "vhost_delete_controller", 00:06:24.966 "vhost_create_blk_controller", 00:06:24.966 "vhost_scsi_controller_remove_target", 00:06:24.966 "vhost_scsi_controller_add_target", 00:06:24.966 "vhost_start_scsi_controller", 00:06:24.966 "vhost_create_scsi_controller", 00:06:24.966 "thread_set_cpumask", 00:06:24.966 "scheduler_set_options", 00:06:24.966 "framework_get_governor", 00:06:24.966 "framework_get_scheduler", 00:06:24.966 
"framework_set_scheduler", 00:06:24.966 "framework_get_reactors", 00:06:24.966 "thread_get_io_channels", 00:06:24.966 "thread_get_pollers", 00:06:24.966 "thread_get_stats", 00:06:24.966 "framework_monitor_context_switch", 00:06:24.966 "spdk_kill_instance", 00:06:24.966 "log_enable_timestamps", 00:06:24.966 "log_get_flags", 00:06:24.966 "log_clear_flag", 00:06:24.966 "log_set_flag", 00:06:24.966 "log_get_level", 00:06:24.966 "log_set_level", 00:06:24.966 "log_get_print_level", 00:06:24.966 "log_set_print_level", 00:06:24.966 "framework_enable_cpumask_locks", 00:06:24.966 "framework_disable_cpumask_locks", 00:06:24.966 "framework_wait_init", 00:06:24.966 "framework_start_init", 00:06:24.966 "scsi_get_devices", 00:06:24.966 "bdev_get_histogram", 00:06:24.966 "bdev_enable_histogram", 00:06:24.966 "bdev_set_qos_limit", 00:06:24.966 "bdev_set_qd_sampling_period", 00:06:24.966 "bdev_get_bdevs", 00:06:24.966 "bdev_reset_iostat", 00:06:24.966 "bdev_get_iostat", 00:06:24.966 "bdev_examine", 00:06:24.966 "bdev_wait_for_examine", 00:06:24.966 "bdev_set_options", 00:06:24.966 "accel_get_stats", 00:06:24.966 "accel_set_options", 00:06:24.966 "accel_set_driver", 00:06:24.966 "accel_crypto_key_destroy", 00:06:24.966 "accel_crypto_keys_get", 00:06:24.966 "accel_crypto_key_create", 00:06:24.966 "accel_assign_opc", 00:06:24.966 "accel_get_module_info", 00:06:24.966 "accel_get_opc_assignments", 00:06:24.966 "vmd_rescan", 00:06:24.966 "vmd_remove_device", 00:06:24.966 "vmd_enable", 00:06:24.966 "sock_get_default_impl", 00:06:24.966 "sock_set_default_impl", 00:06:24.966 "sock_impl_set_options", 00:06:24.966 "sock_impl_get_options", 00:06:24.966 "iobuf_get_stats", 00:06:24.966 "iobuf_set_options", 00:06:24.966 "keyring_get_keys", 00:06:24.966 "vfu_tgt_set_base_path", 00:06:24.966 "framework_get_pci_devices", 00:06:24.966 "framework_get_config", 00:06:24.966 "framework_get_subsystems", 00:06:24.966 "fsdev_set_opts", 00:06:24.966 "fsdev_get_opts", 00:06:24.966 "trace_get_info", 
00:06:24.966 "trace_get_tpoint_group_mask", 00:06:24.966 "trace_disable_tpoint_group", 00:06:24.966 "trace_enable_tpoint_group", 00:06:24.966 "trace_clear_tpoint_mask", 00:06:24.966 "trace_set_tpoint_mask", 00:06:24.966 "notify_get_notifications", 00:06:24.966 "notify_get_types", 00:06:24.966 "spdk_get_version", 00:06:24.966 "rpc_get_methods" 00:06:24.966 ] 00:06:24.966 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.967 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.967 19:05:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 91432 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 91432 ']' 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 91432 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.967 19:05:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91432 00:06:25.225 19:05:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.225 19:05:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.225 19:05:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91432' 00:06:25.225 killing process with pid 91432 00:06:25.225 19:05:10 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 91432 00:06:25.225 19:05:10 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 91432 00:06:25.485 00:06:25.485 real 0m1.382s 00:06:25.485 user 0m2.523s 00:06:25.485 sys 0m0.493s 00:06:25.485 19:05:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.485 19:05:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 
00:06:25.485 ************************************ 00:06:25.485 END TEST spdkcli_tcp 00:06:25.485 ************************************ 00:06:25.485 19:05:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.485 19:05:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.485 19:05:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.485 19:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:25.485 ************************************ 00:06:25.485 START TEST dpdk_mem_utility 00:06:25.485 ************************************ 00:06:25.485 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.746 * Looking for test storage... 00:06:25.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.746 19:05:10 dpdk_mem_utility -- 
scripts/common.sh@338 -- # local 'op=<' 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.746 19:05:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.746 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:06:25.746 --rc genhtml_branch_coverage=1 00:06:25.746 --rc genhtml_function_coverage=1 00:06:25.746 --rc genhtml_legend=1 00:06:25.746 --rc geninfo_all_blocks=1 00:06:25.746 --rc geninfo_unexecuted_blocks=1 00:06:25.746 00:06:25.746 ' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.746 --rc genhtml_branch_coverage=1 00:06:25.746 --rc genhtml_function_coverage=1 00:06:25.746 --rc genhtml_legend=1 00:06:25.746 --rc geninfo_all_blocks=1 00:06:25.746 --rc geninfo_unexecuted_blocks=1 00:06:25.746 00:06:25.746 ' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.746 --rc genhtml_branch_coverage=1 00:06:25.746 --rc genhtml_function_coverage=1 00:06:25.746 --rc genhtml_legend=1 00:06:25.746 --rc geninfo_all_blocks=1 00:06:25.746 --rc geninfo_unexecuted_blocks=1 00:06:25.746 00:06:25.746 ' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.746 --rc genhtml_branch_coverage=1 00:06:25.746 --rc genhtml_function_coverage=1 00:06:25.746 --rc genhtml_legend=1 00:06:25.746 --rc geninfo_all_blocks=1 00:06:25.746 --rc geninfo_unexecuted_blocks=1 00:06:25.746 00:06:25.746 ' 00:06:25.746 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.746 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=91640 00:06:25.746 19:05:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.746 19:05:10 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 91640 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 91640 ']' 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.746 19:05:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.746 [2024-12-06 19:05:10.710882] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:25.746 [2024-12-06 19:05:10.710993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91640 ] 00:06:25.746 [2024-12-06 19:05:10.779544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.005 [2024-12-06 19:05:10.839904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.265 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.265 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:26.265 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:26.265 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:26.265 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.265 19:05:11 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.265 { 00:06:26.265 "filename": "/tmp/spdk_mem_dump.txt" 00:06:26.265 } 00:06:26.265 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.265 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:26.265 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:26.265 1 heaps totaling size 818.000000 MiB 00:06:26.265 size: 818.000000 MiB heap id: 0 00:06:26.265 end heaps---------- 00:06:26.265 9 mempools totaling size 603.782043 MiB 00:06:26.265 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:26.265 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:26.265 size: 100.555481 MiB name: bdev_io_91640 00:06:26.265 size: 50.003479 MiB name: msgpool_91640 00:06:26.265 size: 36.509338 MiB name: fsdev_io_91640 00:06:26.265 size: 21.763794 MiB name: PDU_Pool 00:06:26.265 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:26.265 size: 4.133484 MiB name: evtpool_91640 00:06:26.265 size: 0.026123 MiB name: Session_Pool 00:06:26.265 end mempools------- 00:06:26.265 6 memzones totaling size 4.142822 MiB 00:06:26.265 size: 1.000366 MiB name: RG_ring_0_91640 00:06:26.265 size: 1.000366 MiB name: RG_ring_1_91640 00:06:26.265 size: 1.000366 MiB name: RG_ring_4_91640 00:06:26.265 size: 1.000366 MiB name: RG_ring_5_91640 00:06:26.265 size: 0.125366 MiB name: RG_ring_2_91640 00:06:26.265 size: 0.015991 MiB name: RG_ring_3_91640 00:06:26.265 end memzones------- 00:06:26.265 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:26.265 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:26.265 list of free elements. 
size: 10.852478 MiB 00:06:26.265 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:26.265 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:26.265 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:26.265 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:26.265 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:26.265 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:26.265 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:26.265 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:26.265 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:26.265 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:26.265 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:26.265 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:26.265 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:26.265 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:26.265 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:26.265 list of standard malloc elements. 
size: 199.218628 MiB 00:06:26.265 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:26.265 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:26.265 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:26.265 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:26.265 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:26.265 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:26.265 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:26.265 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:26.265 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:26.265 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:26.265 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:26.265 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:26.265 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:26.265 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:26.265 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:26.265 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:26.265 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:26.266 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:26.266 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:26.266 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:26.266 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:26.266 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:26.266 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:26.266 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:26.266 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:26.266 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:26.266 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:26.266 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:26.266 list of memzone associated elements. 
size: 607.928894 MiB 00:06:26.266 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:26.266 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:26.266 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:26.266 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:26.266 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:26.266 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_91640_0 00:06:26.266 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:26.266 associated memzone info: size: 48.002930 MiB name: MP_msgpool_91640_0 00:06:26.266 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:26.266 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_91640_0 00:06:26.266 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:26.266 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:26.266 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:26.266 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:26.266 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:26.266 associated memzone info: size: 3.000122 MiB name: MP_evtpool_91640_0 00:06:26.266 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:26.266 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_91640 00:06:26.266 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:26.266 associated memzone info: size: 1.007996 MiB name: MP_evtpool_91640 00:06:26.266 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:26.266 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:26.266 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:26.266 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:26.266 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:26.266 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:26.266 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:26.266 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:26.266 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:26.266 associated memzone info: size: 1.000366 MiB name: RG_ring_0_91640 00:06:26.266 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:26.266 associated memzone info: size: 1.000366 MiB name: RG_ring_1_91640 00:06:26.266 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:26.266 associated memzone info: size: 1.000366 MiB name: RG_ring_4_91640 00:06:26.266 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:26.266 associated memzone info: size: 1.000366 MiB name: RG_ring_5_91640 00:06:26.266 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:26.266 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_91640 00:06:26.266 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:26.266 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_91640 00:06:26.266 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:26.266 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:26.266 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:26.266 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:26.266 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:26.266 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:26.266 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:26.266 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_91640 00:06:26.266 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:26.266 associated memzone info: size: 0.125366 MiB name: RG_ring_2_91640 00:06:26.266 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:26.266 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:26.266 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:26.266 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:26.266 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:26.266 associated memzone info: size: 0.015991 MiB name: RG_ring_3_91640 00:06:26.266 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:26.266 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:26.266 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:26.266 associated memzone info: size: 0.000183 MiB name: MP_msgpool_91640 00:06:26.266 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:26.266 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_91640 00:06:26.266 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:26.266 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_91640 00:06:26.266 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:26.266 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:26.266 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:26.266 19:05:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 91640 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 91640 ']' 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 91640 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91640 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.266 19:05:11 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91640' 00:06:26.266 killing process with pid 91640 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 91640 00:06:26.266 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 91640 00:06:26.834 00:06:26.834 real 0m1.157s 00:06:26.834 user 0m1.160s 00:06:26.834 sys 0m0.415s 00:06:26.834 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.834 19:05:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.834 ************************************ 00:06:26.834 END TEST dpdk_mem_utility 00:06:26.834 ************************************ 00:06:26.834 19:05:11 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.834 19:05:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.834 19:05:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.834 19:05:11 -- common/autotest_common.sh@10 -- # set +x 00:06:26.834 ************************************ 00:06:26.834 START TEST event 00:06:26.834 ************************************ 00:06:26.834 19:05:11 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.834 * Looking for test storage... 
00:06:26.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.834 19:05:11 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.834 19:05:11 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.834 19:05:11 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.834 19:05:11 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.834 19:05:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.834 19:05:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.834 19:05:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.834 19:05:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.834 19:05:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.834 19:05:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.834 19:05:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.834 19:05:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.834 19:05:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.834 19:05:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.834 19:05:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.834 19:05:11 event -- scripts/common.sh@344 -- # case "$op" in 00:06:26.834 19:05:11 event -- scripts/common.sh@345 -- # : 1 00:06:26.834 19:05:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.834 19:05:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.834 19:05:11 event -- scripts/common.sh@365 -- # decimal 1 00:06:26.834 19:05:11 event -- scripts/common.sh@353 -- # local d=1 00:06:26.834 19:05:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.834 19:05:11 event -- scripts/common.sh@355 -- # echo 1 00:06:26.834 19:05:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.834 19:05:11 event -- scripts/common.sh@366 -- # decimal 2 00:06:26.834 19:05:11 event -- scripts/common.sh@353 -- # local d=2 00:06:26.834 19:05:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.835 19:05:11 event -- scripts/common.sh@355 -- # echo 2 00:06:26.835 19:05:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.835 19:05:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.835 19:05:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.835 19:05:11 event -- scripts/common.sh@368 -- # return 0 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.835 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.835 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.835 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:26.835 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.835 --rc genhtml_branch_coverage=1 00:06:26.835 --rc genhtml_function_coverage=1 00:06:26.835 --rc genhtml_legend=1 00:06:26.835 --rc geninfo_all_blocks=1 00:06:26.835 --rc geninfo_unexecuted_blocks=1 00:06:26.835 00:06:26.835 ' 00:06:26.835 19:05:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.835 19:05:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.835 19:05:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:26.835 19:05:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.835 19:05:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.094 ************************************ 00:06:27.094 START TEST event_perf 00:06:27.094 ************************************ 00:06:27.094 19:05:11 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.094 Running I/O for 1 seconds...[2024-12-06 19:05:11.912390] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:06:27.094 [2024-12-06 19:05:11.912459] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91865 ] 00:06:27.094 [2024-12-06 19:05:11.983058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.094 [2024-12-06 19:05:12.045640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.094 [2024-12-06 19:05:12.045697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.094 [2024-12-06 19:05:12.045764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.094 [2024-12-06 19:05:12.045768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.473 Running I/O for 1 seconds... 00:06:28.473 lcore 0: 227109 00:06:28.473 lcore 1: 227106 00:06:28.473 lcore 2: 227106 00:06:28.473 lcore 3: 227107 00:06:28.473 done. 
00:06:28.473 00:06:28.473 real 0m1.214s 00:06:28.473 user 0m4.136s 00:06:28.473 sys 0m0.073s 00:06:28.473 19:05:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.473 19:05:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.473 ************************************ 00:06:28.473 END TEST event_perf 00:06:28.473 ************************************ 00:06:28.473 19:05:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.473 19:05:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:28.473 19:05:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.473 19:05:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.473 ************************************ 00:06:28.473 START TEST event_reactor 00:06:28.473 ************************************ 00:06:28.473 19:05:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:28.473 [2024-12-06 19:05:13.172861] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:06:28.473 [2024-12-06 19:05:13.172922] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92118 ] 00:06:28.473 [2024-12-06 19:05:13.239586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.473 [2024-12-06 19:05:13.294578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.412 test_start 00:06:29.412 oneshot 00:06:29.412 tick 100 00:06:29.412 tick 100 00:06:29.412 tick 250 00:06:29.412 tick 100 00:06:29.412 tick 100 00:06:29.412 tick 100 00:06:29.412 tick 250 00:06:29.412 tick 500 00:06:29.412 tick 100 00:06:29.412 tick 100 00:06:29.412 tick 250 00:06:29.412 tick 100 00:06:29.412 tick 100 00:06:29.412 test_end 00:06:29.412 00:06:29.412 real 0m1.197s 00:06:29.412 user 0m1.131s 00:06:29.412 sys 0m0.062s 00:06:29.412 19:05:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.412 19:05:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:29.412 ************************************ 00:06:29.412 END TEST event_reactor 00:06:29.412 ************************************ 00:06:29.412 19:05:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.412 19:05:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:29.412 19:05:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.412 19:05:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.412 ************************************ 00:06:29.412 START TEST event_reactor_perf 00:06:29.412 ************************************ 00:06:29.412 19:05:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 
1 00:06:29.412 [2024-12-06 19:05:14.421814] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:29.412 [2024-12-06 19:05:14.421884] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92276 ] 00:06:29.671 [2024-12-06 19:05:14.488865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.671 [2024-12-06 19:05:14.541621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.607 test_start 00:06:30.607 test_end 00:06:30.607 Performance: 445952 events per second 00:06:30.607 00:06:30.607 real 0m1.196s 00:06:30.607 user 0m1.133s 00:06:30.607 sys 0m0.059s 00:06:30.607 19:05:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.607 19:05:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.607 ************************************ 00:06:30.607 END TEST event_reactor_perf 00:06:30.607 ************************************ 00:06:30.608 19:05:15 event -- event/event.sh@49 -- # uname -s 00:06:30.608 19:05:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.608 19:05:15 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.608 19:05:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.608 19:05:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.608 19:05:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.868 ************************************ 00:06:30.868 START TEST event_scheduler 00:06:30.868 ************************************ 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.868 * Looking for test storage... 00:06:30.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.868 19:05:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.868 --rc genhtml_branch_coverage=1 00:06:30.868 --rc genhtml_function_coverage=1 00:06:30.868 --rc genhtml_legend=1 00:06:30.868 --rc geninfo_all_blocks=1 00:06:30.868 --rc geninfo_unexecuted_blocks=1 00:06:30.868 00:06:30.868 ' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.868 --rc genhtml_branch_coverage=1 00:06:30.868 --rc genhtml_function_coverage=1 00:06:30.868 --rc 
genhtml_legend=1 00:06:30.868 --rc geninfo_all_blocks=1 00:06:30.868 --rc geninfo_unexecuted_blocks=1 00:06:30.868 00:06:30.868 ' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.868 --rc genhtml_branch_coverage=1 00:06:30.868 --rc genhtml_function_coverage=1 00:06:30.868 --rc genhtml_legend=1 00:06:30.868 --rc geninfo_all_blocks=1 00:06:30.868 --rc geninfo_unexecuted_blocks=1 00:06:30.868 00:06:30.868 ' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.868 --rc genhtml_branch_coverage=1 00:06:30.868 --rc genhtml_function_coverage=1 00:06:30.868 --rc genhtml_legend=1 00:06:30.868 --rc geninfo_all_blocks=1 00:06:30.868 --rc geninfo_unexecuted_blocks=1 00:06:30.868 00:06:30.868 ' 00:06:30.868 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.868 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=92464 00:06:30.868 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.868 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.868 19:05:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 92464 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 92464 ']' 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.868 19:05:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.869 19:05:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.869 19:05:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.869 19:05:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.869 [2024-12-06 19:05:15.849950] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:30.869 [2024-12-06 19:05:15.850055] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92464 ] 00:06:31.127 [2024-12-06 19:05:15.919429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.127 [2024-12-06 19:05:15.981596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.127 [2024-12-06 19:05:15.981511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.127 [2024-12-06 19:05:15.981535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.127 [2024-12-06 19:05:15.981593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:31.127 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.127 [2024-12-06 19:05:16.102551] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:31.127 [2024-12-06 19:05:16.102576] scheduler_dynamic.c: 
280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:31.127 [2024-12-06 19:05:16.102609] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:31.127 [2024-12-06 19:05:16.102621] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:31.127 [2024-12-06 19:05:16.102632] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.127 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.127 19:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 [2024-12-06 19:05:16.204178] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:31.386 19:05:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:31.386 19:05:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.386 19:05:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 ************************************ 00:06:31.386 START TEST scheduler_create_thread 00:06:31.386 ************************************ 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 2 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 3 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 4 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 5 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 6 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.386 7 00:06:31.386 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 8 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 9 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 10 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.387 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.952 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.952 00:06:31.952 real 0m0.593s 00:06:31.952 user 0m0.010s 00:06:31.952 sys 0m0.003s 00:06:31.952 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.952 19:05:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.952 ************************************ 00:06:31.952 END TEST scheduler_create_thread 00:06:31.952 ************************************ 00:06:31.952 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:31.952 19:05:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 92464 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 92464 ']' 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 92464 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92464 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92464' 00:06:31.952 killing process with pid 92464 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 92464 00:06:31.952 19:05:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 92464 00:06:32.517 [2024-12-06 19:05:17.304512] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:32.517 00:06:32.517 real 0m1.868s 00:06:32.517 user 0m2.586s 00:06:32.517 sys 0m0.354s 00:06:32.517 19:05:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.517 19:05:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.517 ************************************ 00:06:32.517 END TEST event_scheduler 00:06:32.517 ************************************ 00:06:32.517 19:05:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:32.517 19:05:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:32.517 19:05:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.517 19:05:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.517 19:05:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.776 ************************************ 00:06:32.776 START TEST app_repeat 00:06:32.776 ************************************ 00:06:32.776 19:05:17 event.app_repeat -- 
common/autotest_common.sh@1129 -- # app_repeat_test 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=92775 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 92775' 00:06:32.776 Process app_repeat pid: 92775 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:32.776 spdk_app_start Round 0 00:06:32.776 19:05:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 92775 /var/tmp/spdk-nbd.sock 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 92775 ']' 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-nbd.sock...' 00:06:32.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.776 19:05:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.776 [2024-12-06 19:05:17.595249] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:32.776 [2024-12-06 19:05:17.595319] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92775 ] 00:06:32.776 [2024-12-06 19:05:17.661004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.776 [2024-12-06 19:05:17.721918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.776 [2024-12-06 19:05:17.721922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.033 19:05:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.033 19:05:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:33.033 19:05:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.291 Malloc0 00:06:33.291 19:05:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.550 Malloc1 00:06:33.550 19:05:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.550 
19:05:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.550 19:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.809 /dev/nbd0 00:06:33.809 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.809 19:05:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.809 1+0 records in 00:06:33.809 1+0 records out 00:06:33.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289476 s, 14.1 MB/s 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:33.809 19:05:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:33.809 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.809 19:05:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.809 19:05:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.067 /dev/nbd1 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 
00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.067 1+0 records in 00:06:34.067 1+0 records out 00:06:34.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217355 s, 18.8 MB/s 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:34.067 19:05:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.067 19:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.325 19:05:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:34.325 { 00:06:34.325 "nbd_device": "/dev/nbd0", 00:06:34.325 "bdev_name": "Malloc0" 00:06:34.325 }, 00:06:34.325 { 00:06:34.325 "nbd_device": "/dev/nbd1", 00:06:34.325 "bdev_name": "Malloc1" 00:06:34.325 } 00:06:34.325 ]' 00:06:34.325 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.325 { 00:06:34.325 "nbd_device": "/dev/nbd0", 00:06:34.325 "bdev_name": "Malloc0" 00:06:34.325 }, 00:06:34.325 { 00:06:34.325 "nbd_device": "/dev/nbd1", 00:06:34.325 "bdev_name": "Malloc1" 00:06:34.325 } 00:06:34.325 ]' 00:06:34.325 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.583 /dev/nbd1' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.583 /dev/nbd1' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.583 256+0 records in 00:06:34.583 256+0 records out 00:06:34.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572631 s, 183 MB/s 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.583 256+0 records in 00:06:34.583 256+0 records out 00:06:34.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209009 s, 50.2 MB/s 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.583 256+0 records in 00:06:34.583 256+0 records out 00:06:34.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228536 s, 45.9 MB/s 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.583 19:05:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.584 19:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.842 19:05:19 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.842 19:05:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.101 19:05:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.359 19:05:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.359 19:05:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.927 19:05:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.927 [2024-12-06 19:05:20.890101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.927 [2024-12-06 19:05:20.946820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.927 [2024-12-06 19:05:20.946820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.185 [2024-12-06 19:05:21.000549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.185 [2024-12-06 19:05:21.000635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.714 19:05:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.714 19:05:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.714 spdk_app_start Round 1 00:06:38.714 19:05:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 92775 /var/tmp/spdk-nbd.sock 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 92775 ']' 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:38.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.714 19:05:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.972 19:05:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.972 19:05:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:38.972 19:05:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.231 Malloc0 00:06:39.231 19:05:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.490 Malloc1 00:06:39.748 19:05:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.748 19:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.005 /dev/nbd0 00:06:40.005 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.005 19:05:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.005 1+0 records in 00:06:40.005 1+0 records out 00:06:40.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289712 s, 14.1 MB/s 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.005 19:05:24 
event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.005 19:05:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:40.005 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.006 19:05:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.006 19:05:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.263 /dev/nbd1 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.263 1+0 records in 00:06:40.263 1+0 records out 00:06:40.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151622 s, 
27.0 MB/s 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:40.263 19:05:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.263 19:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.520 { 00:06:40.520 "nbd_device": "/dev/nbd0", 00:06:40.520 "bdev_name": "Malloc0" 00:06:40.520 }, 00:06:40.520 { 00:06:40.520 "nbd_device": "/dev/nbd1", 00:06:40.520 "bdev_name": "Malloc1" 00:06:40.520 } 00:06:40.520 ]' 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.520 { 00:06:40.520 "nbd_device": "/dev/nbd0", 00:06:40.520 "bdev_name": "Malloc0" 00:06:40.520 }, 00:06:40.520 { 00:06:40.520 "nbd_device": "/dev/nbd1", 00:06:40.520 "bdev_name": "Malloc1" 00:06:40.520 } 00:06:40.520 ]' 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.520 /dev/nbd1' 00:06:40.520 19:05:25 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.520 /dev/nbd1' 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.520 19:05:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.521 19:05:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.521 19:05:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.521 256+0 records in 00:06:40.521 256+0 records out 00:06:40.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506468 s, 207 MB/s 00:06:40.521 19:05:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.521 19:05:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.521 256+0 records in 00:06:40.521 256+0 records out 00:06:40.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206943 s, 50.7 MB/s 00:06:40.521 19:05:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.521 19:05:25 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.778 256+0 records in 00:06:40.778 256+0 records out 00:06:40.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023167 s, 45.3 MB/s 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.778 19:05:25 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.778 19:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.037 19:05:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.295 19:05:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.553 19:05:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.553 19:05:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.811 19:05:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.070 [2024-12-06 19:05:27.005444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.070 [2024-12-06 19:05:27.059878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 
00:06:42.070 [2024-12-06 19:05:27.059879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.070 [2024-12-06 19:05:27.118895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.070 [2024-12-06 19:05:27.118982] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.353 19:05:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.353 19:05:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.353 spdk_app_start Round 2 00:06:45.353 19:05:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 92775 /var/tmp/spdk-nbd.sock 00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 92775 ']' 00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.353 19:05:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.353 19:05:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.353 19:05:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:45.353 19:05:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.353 Malloc0 00:06:45.353 19:05:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.611 Malloc1 00:06:45.611 19:05:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.611 19:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.178 /dev/nbd0 00:06:46.178 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.178 19:05:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.178 1+0 records in 00:06:46.178 1+0 records out 00:06:46.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190331 s, 21.5 MB/s 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.178 19:05:30 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.178 19:05:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.178 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.178 19:05:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.178 19:05:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.436 /dev/nbd1 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.436 1+0 records in 00:06:46.436 1+0 records out 00:06:46.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195134 s, 21.0 MB/s 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.436 19:05:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.436 19:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.694 { 00:06:46.694 "nbd_device": "/dev/nbd0", 00:06:46.694 "bdev_name": "Malloc0" 00:06:46.694 }, 00:06:46.694 { 00:06:46.694 "nbd_device": "/dev/nbd1", 00:06:46.694 "bdev_name": "Malloc1" 00:06:46.694 } 00:06:46.694 ]' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.694 { 00:06:46.694 "nbd_device": "/dev/nbd0", 00:06:46.694 "bdev_name": "Malloc0" 00:06:46.694 }, 00:06:46.694 { 00:06:46.694 "nbd_device": "/dev/nbd1", 00:06:46.694 "bdev_name": "Malloc1" 00:06:46.694 } 00:06:46.694 ]' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.694 /dev/nbd1' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.694 /dev/nbd1' 00:06:46.694 
19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.694 19:05:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.694 256+0 records in 00:06:46.694 256+0 records out 00:06:46.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509098 s, 206 MB/s 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.695 256+0 records in 00:06:46.695 256+0 records out 00:06:46.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207906 s, 50.4 MB/s 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.695 256+0 records in 00:06:46.695 256+0 records out 00:06:46.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225652 s, 46.5 MB/s 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.695 19:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.953 19:05:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.520 19:05:32 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.520 19:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.777 19:05:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.777 19:05:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.034 19:05:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.034 [2024-12-06 19:05:33.083685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.293 [2024-12-06 19:05:33.139693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.293 [2024-12-06 19:05:33.139697] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.293 [2024-12-06 19:05:33.193629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.293 [2024-12-06 19:05:33.193763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.575 19:05:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 92775 /var/tmp/spdk-nbd.sock 00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 92775 ']' 00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.575 19:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.575 19:05:36 event.app_repeat -- event/event.sh@39 -- # killprocess 92775 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 92775 ']' 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 92775 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92775 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92775' 00:06:51.575 killing process with pid 92775 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 92775 00:06:51.575 19:05:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 92775 00:06:51.575 spdk_app_start is called in Round 0. 00:06:51.575 Shutdown signal received, stop current app iteration 00:06:51.576 Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 reinitialization... 00:06:51.576 spdk_app_start is called in Round 1. 00:06:51.576 Shutdown signal received, stop current app iteration 00:06:51.576 Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 reinitialization... 00:06:51.576 spdk_app_start is called in Round 2. 
00:06:51.576 Shutdown signal received, stop current app iteration 00:06:51.576 Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 reinitialization... 00:06:51.576 spdk_app_start is called in Round 3. 00:06:51.576 Shutdown signal received, stop current app iteration 00:06:51.576 19:05:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.576 19:05:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.576 00:06:51.576 real 0m18.784s 00:06:51.576 user 0m41.430s 00:06:51.576 sys 0m3.266s 00:06:51.576 19:05:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.576 19:05:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.576 ************************************ 00:06:51.576 END TEST app_repeat 00:06:51.576 ************************************ 00:06:51.576 19:05:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.576 19:05:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.576 19:05:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.576 19:05:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.576 19:05:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.576 ************************************ 00:06:51.576 START TEST cpu_locks 00:06:51.576 ************************************ 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.576 * Looking for test storage... 
00:06:51.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.576 19:05:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.576 --rc genhtml_branch_coverage=1 00:06:51.576 --rc genhtml_function_coverage=1 00:06:51.576 --rc genhtml_legend=1 00:06:51.576 --rc geninfo_all_blocks=1 00:06:51.576 --rc geninfo_unexecuted_blocks=1 00:06:51.576 00:06:51.576 ' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.576 --rc genhtml_branch_coverage=1 00:06:51.576 --rc genhtml_function_coverage=1 00:06:51.576 --rc genhtml_legend=1 00:06:51.576 --rc geninfo_all_blocks=1 00:06:51.576 --rc geninfo_unexecuted_blocks=1 
00:06:51.576 00:06:51.576 ' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.576 --rc genhtml_branch_coverage=1 00:06:51.576 --rc genhtml_function_coverage=1 00:06:51.576 --rc genhtml_legend=1 00:06:51.576 --rc geninfo_all_blocks=1 00:06:51.576 --rc geninfo_unexecuted_blocks=1 00:06:51.576 00:06:51.576 ' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.576 --rc genhtml_branch_coverage=1 00:06:51.576 --rc genhtml_function_coverage=1 00:06:51.576 --rc genhtml_legend=1 00:06:51.576 --rc geninfo_all_blocks=1 00:06:51.576 --rc geninfo_unexecuted_blocks=1 00:06:51.576 00:06:51.576 ' 00:06:51.576 19:05:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.576 19:05:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.576 19:05:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.576 19:05:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.576 19:05:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.576 ************************************ 00:06:51.576 START TEST default_locks 00:06:51.576 ************************************ 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=95262 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 95262 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 95262 ']' 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.576 19:05:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.836 [2024-12-06 19:05:36.643382] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:06:51.836 [2024-12-06 19:05:36.643486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95262 ] 00:06:51.836 [2024-12-06 19:05:36.709926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.836 [2024-12-06 19:05:36.769691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.095 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.095 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:52.095 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 95262 00:06:52.095 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 95262 00:06:52.095 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.354 lslocks: write error 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 95262 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 95262 ']' 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 95262 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95262 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 95262' 00:06:52.354 killing process with pid 95262 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 95262 00:06:52.354 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 95262 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 95262 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 95262 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 95262 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 95262 ']' 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (95262) - No such process 00:06:52.920 ERROR: process (pid: 95262) is no longer running 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:52.920 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.921 00:06:52.921 real 0m1.119s 00:06:52.921 user 0m1.088s 00:06:52.921 sys 0m0.501s 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.921 19:05:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.921 ************************************ 00:06:52.921 END TEST default_locks 00:06:52.921 ************************************ 00:06:52.921 19:05:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.921 19:05:37 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.921 19:05:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.921 19:05:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.921 ************************************ 00:06:52.921 START TEST default_locks_via_rpc 00:06:52.921 ************************************ 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=95430 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 95430 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 95430 ']' 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.921 19:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.921 [2024-12-06 19:05:37.810211] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:06:52.921 [2024-12-06 19:05:37.810295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95430 ] 00:06:52.921 [2024-12-06 19:05:37.874107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.921 [2024-12-06 19:05:37.926458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.181 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.181 19:05:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.182 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 95430 00:06:53.182 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 95430 00:06:53.182 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 95430 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 95430 ']' 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 95430 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.440 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95430 00:06:53.699 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.699 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.699 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95430' 00:06:53.699 killing process with pid 95430 00:06:53.699 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 95430 00:06:53.699 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 95430 00:06:53.958 00:06:53.958 real 0m1.153s 00:06:53.958 user 0m1.126s 00:06:53.958 sys 0m0.483s 00:06:53.958 19:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.958 19:05:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.958 ************************************ 00:06:53.958 END TEST default_locks_via_rpc 00:06:53.958 ************************************ 00:06:53.958 19:05:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:53.958 19:05:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.958 19:05:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.958 19:05:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.958 ************************************ 00:06:53.958 START TEST non_locking_app_on_locked_coremask 00:06:53.958 ************************************ 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=95592 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 95592 /var/tmp/spdk.sock 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 95592 ']' 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:53.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.958 19:05:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.218 [2024-12-06 19:05:39.018499] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:54.218 [2024-12-06 19:05:39.018606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95592 ] 00:06:54.218 [2024-12-06 19:05:39.084947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.218 [2024-12-06 19:05:39.144621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.477 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=95599 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 95599 /var/tmp/spdk2.sock 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 95599 ']' 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.478 19:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.478 [2024-12-06 19:05:39.459885] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:54.478 [2024-12-06 19:05:39.459986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95599 ] 00:06:54.736 [2024-12-06 19:05:39.558124] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.736 [2024-12-06 19:05:39.558155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.736 [2024-12-06 19:05:39.675596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.672 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.672 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.672 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 95592 00:06:55.672 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 95592 00:06:55.672 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.931 lslocks: write error 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 95592 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 95592 ']' 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 95592 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95592 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 95592' 00:06:55.931 killing process with pid 95592 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 95592 00:06:55.931 19:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 95592 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 95599 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 95599 ']' 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 95599 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95599 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95599' 00:06:56.866 killing process with pid 95599 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 95599 00:06:56.866 19:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 95599 00:06:57.125 00:06:57.125 real 0m3.183s 00:06:57.125 user 0m3.458s 00:06:57.125 sys 0m1.007s 00:06:57.126 19:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.126 19:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.126 ************************************ 00:06:57.126 END TEST non_locking_app_on_locked_coremask 00:06:57.126 ************************************ 00:06:57.126 19:05:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.126 19:05:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.126 19:05:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.126 19:05:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.385 ************************************ 00:06:57.385 START TEST locking_app_on_unlocked_coremask 00:06:57.385 ************************************ 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=95962 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 95962 /var/tmp/spdk.sock 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 95962 ']' 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.385 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.385 [2024-12-06 19:05:42.253235] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:57.385 [2024-12-06 19:05:42.253310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95962 ] 00:06:57.385 [2024-12-06 19:05:42.317875] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.385 [2024-12-06 19:05:42.317906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.385 [2024-12-06 19:05:42.371453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.644 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=96029 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 96029 /var/tmp/spdk2.sock 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@835 -- # '[' -z 96029 ']' 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.645 19:05:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.645 [2024-12-06 19:05:42.684291] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:06:57.645 [2024-12-06 19:05:42.684370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96029 ] 00:06:57.909 [2024-12-06 19:05:42.790056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.909 [2024-12-06 19:05:42.901315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.844 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.844 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.845 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 96029 00:06:58.845 19:05:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 96029 00:06:58.845 19:05:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.410 lslocks: write error 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 95962 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 95962 ']' 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 95962 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95962 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95962' 00:06:59.410 killing process with pid 95962 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 95962 00:06:59.410 19:05:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 95962 00:06:59.977 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 96029 00:06:59.977 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 96029 ']' 00:06:59.977 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 96029 00:06:59.977 19:05:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.977 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.977 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96029 00:07:00.235 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.235 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.235 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96029' 00:07:00.235 killing process with pid 96029 00:07:00.235 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 96029 00:07:00.235 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 96029 00:07:00.494 00:07:00.494 real 0m3.279s 00:07:00.494 user 0m3.512s 00:07:00.494 sys 0m1.035s 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.494 ************************************ 00:07:00.494 END TEST locking_app_on_unlocked_coremask 00:07:00.494 ************************************ 00:07:00.494 19:05:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:00.494 19:05:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.494 19:05:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.494 19:05:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.494 
************************************ 00:07:00.494 START TEST locking_app_on_locked_coremask 00:07:00.494 ************************************ 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=96345 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 96345 /var/tmp/spdk.sock 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 96345 ']' 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.494 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.752 [2024-12-06 19:05:45.586685] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:07:00.752 [2024-12-06 19:05:45.586829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96345 ] 00:07:00.752 [2024-12-06 19:05:45.654324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.752 [2024-12-06 19:05:45.712491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=96469 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 96469 /var/tmp/spdk2.sock 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 96469 /var/tmp/spdk2.sock 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 96469 /var/tmp/spdk2.sock 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 96469 ']' 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.010 19:05:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.010 [2024-12-06 19:05:46.033334] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:01.010 [2024-12-06 19:05:46.033428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96469 ] 00:07:01.267 [2024-12-06 19:05:46.130316] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 96345 has claimed it. 00:07:01.267 [2024-12-06 19:05:46.130377] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:01.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (96469) - No such process 00:07:01.832 ERROR: process (pid: 96469) is no longer running 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 96345 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 96345 00:07:01.832 19:05:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.090 lslocks: write error 00:07:02.090 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 96345 00:07:02.090 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 96345 ']' 00:07:02.090 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 96345 00:07:02.090 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96345 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96345' 00:07:02.349 killing process with pid 96345 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 96345 00:07:02.349 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 96345 00:07:02.608 00:07:02.608 real 0m2.047s 00:07:02.608 user 0m2.256s 00:07:02.608 sys 0m0.658s 00:07:02.608 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.608 19:05:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.608 ************************************ 00:07:02.608 END TEST locking_app_on_locked_coremask 00:07:02.608 ************************************ 00:07:02.608 19:05:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:02.608 19:05:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.608 19:05:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.608 19:05:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.608 ************************************ 00:07:02.608 START TEST locking_overlapped_coremask 00:07:02.608 ************************************ 00:07:02.608 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@132 -- # spdk_tgt_pid=96641 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 96641 /var/tmp/spdk.sock 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 96641 ']' 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.609 19:05:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.869 [2024-12-06 19:05:47.689476] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:07:02.869 [2024-12-06 19:05:47.689557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96641 ] 00:07:02.869 [2024-12-06 19:05:47.752804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.869 [2024-12-06 19:05:47.812848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.869 [2024-12-06 19:05:47.812910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.869 [2024-12-06 19:05:47.812914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.128 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=96769 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 96769 /var/tmp/spdk2.sock 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 96769 /var/tmp/spdk2.sock 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.129 19:05:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 96769 /var/tmp/spdk2.sock 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 96769 ']' 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.129 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.129 [2024-12-06 19:05:48.150660] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:03.129 [2024-12-06 19:05:48.150774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96769 ] 00:07:03.388 [2024-12-06 19:05:48.258612] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 96641 has claimed it. 00:07:03.388 [2024-12-06 19:05:48.258684] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:03.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (96769) - No such process 00:07:03.956 ERROR: process (pid: 96769) is no longer running 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 96641 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 96641 ']' 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 96641 
00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96641 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96641' 00:07:03.956 killing process with pid 96641 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 96641 00:07:03.956 19:05:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 96641 00:07:04.524 00:07:04.524 real 0m1.686s 00:07:04.524 user 0m4.726s 00:07:04.524 sys 0m0.437s 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 END TEST locking_overlapped_coremask 00:07:04.524 ************************************ 00:07:04.524 19:05:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.524 19:05:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.524 19:05:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.524 19:05:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 ************************************ 00:07:04.524 
START TEST locking_overlapped_coremask_via_rpc 00:07:04.524 ************************************ 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=96931 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 96931 /var/tmp/spdk.sock 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 96931 ']' 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.524 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 [2024-12-06 19:05:49.427102] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:07:04.524 [2024-12-06 19:05:49.427178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96931 ] 00:07:04.524 [2024-12-06 19:05:49.491832] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:04.524 [2024-12-06 19:05:49.491865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.524 [2024-12-06 19:05:49.550939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.524 [2024-12-06 19:05:49.551005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.524 [2024-12-06 19:05:49.551007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=96952 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 96952 /var/tmp/spdk2.sock 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 96952 ']' 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.783 
19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.783 19:05:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.042 [2024-12-06 19:05:49.882822] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:05.042 [2024-12-06 19:05:49.882910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96952 ] 00:07:05.042 [2024-12-06 19:05:49.986926] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.042 [2024-12-06 19:05:49.986969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.328 [2024-12-06 19:05:50.114844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.328 [2024-12-06 19:05:50.114896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.328 [2024-12-06 19:05:50.114899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.896 19:05:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.896 [2024-12-06 19:05:50.885827] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 96931 has claimed it. 00:07:05.896 request: 00:07:05.896 { 00:07:05.896 "method": "framework_enable_cpumask_locks", 00:07:05.896 "req_id": 1 00:07:05.896 } 00:07:05.896 Got JSON-RPC error response 00:07:05.896 response: 00:07:05.896 { 00:07:05.896 "code": -32603, 00:07:05.896 "message": "Failed to claim CPU core: 2" 00:07:05.896 } 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 96931 /var/tmp/spdk.sock 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 96931 ']' 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.896 19:05:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 96952 /var/tmp/spdk2.sock 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 96952 ']' 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.155 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.413 00:07:06.413 real 0m2.069s 00:07:06.413 user 0m1.120s 00:07:06.413 sys 0m0.190s 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.413 19:05:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.413 ************************************ 00:07:06.413 END TEST locking_overlapped_coremask_via_rpc 00:07:06.413 ************************************ 00:07:06.671 19:05:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.671 19:05:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 96931 ]] 00:07:06.671 19:05:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 96931 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 96931 ']' 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 96931 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96931 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96931' 00:07:06.671 killing process with pid 96931 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 96931 00:07:06.671 19:05:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 96931 00:07:06.930 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 96952 ]] 00:07:06.930 19:05:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 96952 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 96952 ']' 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 96952 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96952 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:06.930 19:05:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:06.931 19:05:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96952' 00:07:06.931 killing 
process with pid 96952 00:07:06.931 19:05:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 96952 00:07:06.931 19:05:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 96952 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 96931 ]] 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 96931 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 96931 ']' 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 96931 00:07:07.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (96931) - No such process 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 96931 is not found' 00:07:07.498 Process with pid 96931 is not found 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 96952 ]] 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 96952 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 96952 ']' 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 96952 00:07:07.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (96952) - No such process 00:07:07.498 19:05:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 96952 is not found' 00:07:07.498 Process with pid 96952 is not found 00:07:07.498 19:05:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.498 00:07:07.498 real 0m16.005s 00:07:07.498 user 0m29.104s 00:07:07.499 sys 0m5.277s 00:07:07.499 19:05:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.499 19:05:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set 
+x 00:07:07.499 ************************************ 00:07:07.499 END TEST cpu_locks 00:07:07.499 ************************************ 00:07:07.499 00:07:07.499 real 0m40.718s 00:07:07.499 user 1m19.728s 00:07:07.499 sys 0m9.363s 00:07:07.499 19:05:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.499 19:05:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.499 ************************************ 00:07:07.499 END TEST event 00:07:07.499 ************************************ 00:07:07.499 19:05:52 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:07.499 19:05:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.499 19:05:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.499 19:05:52 -- common/autotest_common.sh@10 -- # set +x 00:07:07.499 ************************************ 00:07:07.499 START TEST thread 00:07:07.499 ************************************ 00:07:07.499 19:05:52 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:07.499 * Looking for test storage... 
00:07:07.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:07.499 19:05:52 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.499 19:05:52 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.499 19:05:52 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.758 19:05:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.758 19:05:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.758 19:05:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.758 19:05:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.758 19:05:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.758 19:05:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.758 19:05:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.758 19:05:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.758 19:05:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.758 19:05:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.758 19:05:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.758 19:05:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.758 19:05:52 thread -- scripts/common.sh@345 -- # : 1 00:07:07.758 19:05:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.758 19:05:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.758 19:05:52 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.758 19:05:52 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.758 19:05:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.758 19:05:52 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.758 19:05:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.758 19:05:52 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.758 19:05:52 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.758 19:05:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.758 19:05:52 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.758 19:05:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.758 19:05:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.758 19:05:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.758 19:05:52 thread -- scripts/common.sh@368 -- # return 0 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.758 --rc genhtml_branch_coverage=1 00:07:07.758 --rc genhtml_function_coverage=1 00:07:07.758 --rc genhtml_legend=1 00:07:07.758 --rc geninfo_all_blocks=1 00:07:07.758 --rc geninfo_unexecuted_blocks=1 00:07:07.758 00:07:07.758 ' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.758 --rc genhtml_branch_coverage=1 00:07:07.758 --rc genhtml_function_coverage=1 00:07:07.758 --rc genhtml_legend=1 00:07:07.758 --rc geninfo_all_blocks=1 00:07:07.758 --rc geninfo_unexecuted_blocks=1 00:07:07.758 00:07:07.758 ' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.758 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.758 --rc genhtml_branch_coverage=1 00:07:07.758 --rc genhtml_function_coverage=1 00:07:07.758 --rc genhtml_legend=1 00:07:07.758 --rc geninfo_all_blocks=1 00:07:07.758 --rc geninfo_unexecuted_blocks=1 00:07:07.758 00:07:07.758 ' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.758 --rc genhtml_branch_coverage=1 00:07:07.758 --rc genhtml_function_coverage=1 00:07:07.758 --rc genhtml_legend=1 00:07:07.758 --rc geninfo_all_blocks=1 00:07:07.758 --rc geninfo_unexecuted_blocks=1 00:07:07.758 00:07:07.758 ' 00:07:07.758 19:05:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.758 19:05:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.758 ************************************ 00:07:07.758 START TEST thread_poller_perf 00:07:07.758 ************************************ 00:07:07.758 19:05:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.758 [2024-12-06 19:05:52.671573] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:07:07.758 [2024-12-06 19:05:52.671641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97438 ] 00:07:07.758 [2024-12-06 19:05:52.740136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.758 [2024-12-06 19:05:52.798027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:09.132 [2024-12-06T18:05:54.181Z] ====================================== 00:07:09.132 [2024-12-06T18:05:54.181Z] busy:2713312848 (cyc) 00:07:09.132 [2024-12-06T18:05:54.181Z] total_run_count: 365000 00:07:09.132 [2024-12-06T18:05:54.181Z] tsc_hz: 2700000000 (cyc) 00:07:09.132 [2024-12-06T18:05:54.181Z] ====================================== 00:07:09.132 [2024-12-06T18:05:54.181Z] poller_cost: 7433 (cyc), 2752 (nsec) 00:07:09.132 00:07:09.132 real 0m1.213s 00:07:09.132 user 0m1.140s 00:07:09.132 sys 0m0.067s 00:07:09.132 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.132 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.132 ************************************ 00:07:09.132 END TEST thread_poller_perf 00:07:09.132 ************************************ 00:07:09.132 19:05:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.132 19:05:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:09.132 19:05:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.132 19:05:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.132 ************************************ 00:07:09.132 START TEST thread_poller_perf 00:07:09.132 
************************************ 00:07:09.132 19:05:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.132 [2024-12-06 19:05:53.931040] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:09.132 [2024-12-06 19:05:53.931108] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97601 ] 00:07:09.132 [2024-12-06 19:05:53.997519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.132 [2024-12-06 19:05:54.054143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.132 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.504 [2024-12-06T18:05:55.553Z] ====================================== 00:07:10.504 [2024-12-06T18:05:55.553Z] busy:2702395401 (cyc) 00:07:10.504 [2024-12-06T18:05:55.553Z] total_run_count: 4477000 00:07:10.504 [2024-12-06T18:05:55.553Z] tsc_hz: 2700000000 (cyc) 00:07:10.504 [2024-12-06T18:05:55.553Z] ====================================== 00:07:10.504 [2024-12-06T18:05:55.553Z] poller_cost: 603 (cyc), 223 (nsec) 00:07:10.504 00:07:10.504 real 0m1.201s 00:07:10.504 user 0m1.127s 00:07:10.504 sys 0m0.069s 00:07:10.504 19:05:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.504 19:05:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.504 ************************************ 00:07:10.504 END TEST thread_poller_perf 00:07:10.504 ************************************ 00:07:10.504 19:05:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:10.504 00:07:10.504 real 0m2.655s 00:07:10.504 user 0m2.395s 00:07:10.504 sys 0m0.264s 00:07:10.504 19:05:55 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.504 19:05:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.504 ************************************ 00:07:10.504 END TEST thread 00:07:10.504 ************************************ 00:07:10.504 19:05:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:10.504 19:05:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.504 19:05:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.504 19:05:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.504 19:05:55 -- common/autotest_common.sh@10 -- # set +x 00:07:10.504 ************************************ 00:07:10.504 START TEST app_cmdline 00:07:10.504 ************************************ 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.504 * Looking for test storage... 00:07:10.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.504 19:05:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.504 --rc genhtml_branch_coverage=1 
00:07:10.504 --rc genhtml_function_coverage=1 00:07:10.504 --rc genhtml_legend=1 00:07:10.504 --rc geninfo_all_blocks=1 00:07:10.504 --rc geninfo_unexecuted_blocks=1 00:07:10.504 00:07:10.504 ' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.504 --rc genhtml_branch_coverage=1 00:07:10.504 --rc genhtml_function_coverage=1 00:07:10.504 --rc genhtml_legend=1 00:07:10.504 --rc geninfo_all_blocks=1 00:07:10.504 --rc geninfo_unexecuted_blocks=1 00:07:10.504 00:07:10.504 ' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.504 --rc genhtml_branch_coverage=1 00:07:10.504 --rc genhtml_function_coverage=1 00:07:10.504 --rc genhtml_legend=1 00:07:10.504 --rc geninfo_all_blocks=1 00:07:10.504 --rc geninfo_unexecuted_blocks=1 00:07:10.504 00:07:10.504 ' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.504 --rc genhtml_branch_coverage=1 00:07:10.504 --rc genhtml_function_coverage=1 00:07:10.504 --rc genhtml_legend=1 00:07:10.504 --rc geninfo_all_blocks=1 00:07:10.504 --rc geninfo_unexecuted_blocks=1 00:07:10.504 00:07:10.504 ' 00:07:10.504 19:05:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.504 19:05:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=97807 00:07:10.504 19:05:55 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.504 19:05:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 97807 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 97807 ']' 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.504 19:05:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.505 19:05:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.505 19:05:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.505 [2024-12-06 19:05:55.380898] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:10.505 [2024-12-06 19:05:55.380986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97807 ] 00:07:10.505 [2024-12-06 19:05:55.444835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.505 [2024-12-06 19:05:55.500886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.762 19:05:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.763 19:05:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:10.763 19:05:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:11.020 { 00:07:11.020 "version": "SPDK v25.01-pre git sha1 0787c2b4e", 00:07:11.020 "fields": { 00:07:11.020 "major": 25, 00:07:11.020 "minor": 1, 00:07:11.020 "patch": 0, 00:07:11.020 "suffix": "-pre", 00:07:11.020 "commit": "0787c2b4e" 00:07:11.020 } 00:07:11.020 } 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.020 19:05:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.020 19:05:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.021 19:05:56 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.278 request: 00:07:11.278 { 00:07:11.278 "method": "env_dpdk_get_mem_stats", 00:07:11.278 "req_id": 1 00:07:11.278 } 00:07:11.278 Got JSON-RPC error response 00:07:11.278 response: 00:07:11.278 { 00:07:11.278 "code": -32601, 00:07:11.278 "message": "Method not found" 00:07:11.278 } 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.536 19:05:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 97807 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 97807 ']' 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 97807 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97807 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97807' 00:07:11.536 killing process with pid 97807 00:07:11.536 19:05:56 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 97807 00:07:11.536 19:05:56 app_cmdline -- common/autotest_common.sh@978 -- # wait 97807 00:07:11.796 00:07:11.796 real 0m1.591s 00:07:11.796 user 0m1.969s 00:07:11.796 sys 0m0.465s 00:07:11.796 19:05:56 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.796 19:05:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.796 ************************************ 00:07:11.796 END TEST app_cmdline 00:07:11.796 ************************************ 00:07:11.796 19:05:56 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:11.796 19:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.796 19:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.796 19:05:56 -- common/autotest_common.sh@10 -- # set +x 00:07:11.796 ************************************ 00:07:11.796 START TEST version 00:07:11.796 ************************************ 00:07:11.796 19:05:56 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.055 * Looking for test storage... 
00:07:12.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.055 19:05:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.055 19:05:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.055 19:05:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.055 19:05:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.055 19:05:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.055 19:05:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.055 19:05:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.055 19:05:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.055 19:05:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.055 19:05:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.055 19:05:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.055 19:05:56 version -- scripts/common.sh@344 -- # case "$op" in 00:07:12.055 19:05:56 version -- scripts/common.sh@345 -- # : 1 00:07:12.055 19:05:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.055 19:05:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.055 19:05:56 version -- scripts/common.sh@365 -- # decimal 1 00:07:12.055 19:05:56 version -- scripts/common.sh@353 -- # local d=1 00:07:12.055 19:05:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.055 19:05:56 version -- scripts/common.sh@355 -- # echo 1 00:07:12.055 19:05:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.055 19:05:56 version -- scripts/common.sh@366 -- # decimal 2 00:07:12.055 19:05:56 version -- scripts/common.sh@353 -- # local d=2 00:07:12.055 19:05:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.055 19:05:56 version -- scripts/common.sh@355 -- # echo 2 00:07:12.055 19:05:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.055 19:05:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.055 19:05:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.055 19:05:56 version -- scripts/common.sh@368 -- # return 0 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.055 19:05:56 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.056 --rc genhtml_branch_coverage=1 00:07:12.056 --rc genhtml_function_coverage=1 00:07:12.056 --rc genhtml_legend=1 00:07:12.056 --rc geninfo_all_blocks=1 00:07:12.056 --rc geninfo_unexecuted_blocks=1 00:07:12.056 00:07:12.056 ' 00:07:12.056 19:05:56 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.056 --rc genhtml_branch_coverage=1 00:07:12.056 --rc genhtml_function_coverage=1 00:07:12.056 --rc genhtml_legend=1 00:07:12.056 --rc geninfo_all_blocks=1 00:07:12.056 --rc geninfo_unexecuted_blocks=1 00:07:12.056 00:07:12.056 ' 00:07:12.056 19:05:56 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.056 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.056 --rc genhtml_branch_coverage=1 00:07:12.056 --rc genhtml_function_coverage=1 00:07:12.056 --rc genhtml_legend=1 00:07:12.056 --rc geninfo_all_blocks=1 00:07:12.056 --rc geninfo_unexecuted_blocks=1 00:07:12.056 00:07:12.056 ' 00:07:12.056 19:05:56 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.056 --rc genhtml_branch_coverage=1 00:07:12.056 --rc genhtml_function_coverage=1 00:07:12.056 --rc genhtml_legend=1 00:07:12.056 --rc geninfo_all_blocks=1 00:07:12.056 --rc geninfo_unexecuted_blocks=1 00:07:12.056 00:07:12.056 ' 00:07:12.056 19:05:56 version -- app/version.sh@17 -- # get_header_version major 00:07:12.056 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.056 19:05:56 version -- app/version.sh@17 -- # major=25 00:07:12.056 19:05:56 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.056 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.056 19:05:56 version -- app/version.sh@18 -- # minor=1 00:07:12.056 19:05:56 version -- app/version.sh@19 -- # get_header_version patch 00:07:12.056 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.056 
19:05:56 version -- app/version.sh@19 -- # patch=0 00:07:12.056 19:05:56 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.056 19:05:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # cut -f2 00:07:12.056 19:05:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.056 19:05:57 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.056 19:05:57 version -- app/version.sh@22 -- # version=25.1 00:07:12.056 19:05:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.056 19:05:57 version -- app/version.sh@28 -- # version=25.1rc0 00:07:12.056 19:05:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:12.056 19:05:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.056 19:05:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:12.056 19:05:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:12.056 00:07:12.056 real 0m0.205s 00:07:12.056 user 0m0.138s 00:07:12.056 sys 0m0.092s 00:07:12.056 19:05:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.056 19:05:57 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.056 ************************************ 00:07:12.056 END TEST version 00:07:12.056 ************************************ 00:07:12.056 19:05:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:12.056 19:05:57 -- spdk/autotest.sh@194 -- # uname -s 00:07:12.056 19:05:57 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:12.056 19:05:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:12.056 19:05:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:12.056 19:05:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:12.056 19:05:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.056 19:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.056 19:05:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:12.056 19:05:57 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:12.056 19:05:57 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.056 19:05:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.056 19:05:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.056 19:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.315 ************************************ 00:07:12.315 START TEST nvmf_tcp 00:07:12.315 ************************************ 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.315 * Looking for test storage... 
00:07:12.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.315 19:05:57 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.315 --rc genhtml_branch_coverage=1 00:07:12.315 --rc genhtml_function_coverage=1 00:07:12.315 --rc genhtml_legend=1 00:07:12.315 --rc geninfo_all_blocks=1 00:07:12.315 --rc geninfo_unexecuted_blocks=1 00:07:12.315 00:07:12.315 ' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.315 --rc genhtml_branch_coverage=1 00:07:12.315 --rc genhtml_function_coverage=1 00:07:12.315 --rc genhtml_legend=1 00:07:12.315 --rc geninfo_all_blocks=1 00:07:12.315 --rc geninfo_unexecuted_blocks=1 00:07:12.315 00:07:12.315 ' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:12.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.315 --rc genhtml_branch_coverage=1 00:07:12.315 --rc genhtml_function_coverage=1 00:07:12.315 --rc genhtml_legend=1 00:07:12.315 --rc geninfo_all_blocks=1 00:07:12.315 --rc geninfo_unexecuted_blocks=1 00:07:12.315 00:07:12.315 ' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.315 --rc genhtml_branch_coverage=1 00:07:12.315 --rc genhtml_function_coverage=1 00:07:12.315 --rc genhtml_legend=1 00:07:12.315 --rc geninfo_all_blocks=1 00:07:12.315 --rc geninfo_unexecuted_blocks=1 00:07:12.315 00:07:12.315 ' 00:07:12.315 19:05:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:12.315 19:05:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:12.315 19:05:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.315 19:05:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.315 ************************************ 00:07:12.315 START TEST nvmf_target_core 00:07:12.315 ************************************ 00:07:12.315 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:12.315 * Looking for test storage... 
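The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) splits each version string on `.`, `-`, or `:` and compares the fields numerically, left to right. A minimal re-sketch of that logic (an illustrative stand-in, not the actual `scripts/common.sh` implementation, and only handling numeric fields):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the field-wise version comparison walked through
# in the trace above: returns success (0) when $1 < $2.
lt() {
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    # Walk up to the longer of the two field lists, padding with 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first version greater -> not less-than
        (( a < b )) && return 0   # first version smaller -> less-than
    done
    return 1                      # equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This matches the decisions visible in the trace: `ver1_l=2`, `ver2_l=1`, then `decimal 1` vs `decimal 2` at field 0 decides the comparison.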
00:07:12.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.315 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.315 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.315 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 
00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.576 --rc genhtml_branch_coverage=1 00:07:12.576 --rc genhtml_function_coverage=1 00:07:12.576 --rc genhtml_legend=1 00:07:12.576 --rc geninfo_all_blocks=1 00:07:12.576 --rc geninfo_unexecuted_blocks=1 00:07:12.576 00:07:12.576 ' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.576 19:05:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
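The `common.sh: line 33: [: : integer expression expected` message above is a benign diagnostic: `test`'s `-eq` operator requires integer operands, so `'[' '' -eq 1 ']'` with an empty left operand raises an error (exit status 2) instead of evaluating to false (1). A small reproduction, with one defensive pattern (illustrative only, not necessarily what `common.sh` does):

```shell
#!/usr/bin/env bash
# Reproduce the diagnostic: empty string where -eq expects an integer.
[ '' -eq 1 ] 2>&1 | grep -o 'integer expression expected'

# Exit status is 2 (test error), not 1 (false).
[ '' -eq 1 ] 2>/dev/null
echo "exit status: $?"

# Defensive pattern: substitute a default so the operand is always numeric.
val=''
[ "${val:-0}" -eq 1 ] && echo "enabled" || echo "disabled"
```

Because the script only branches on success, the error behaves like a false result and the run continues, which is why the log proceeds normally after the message.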
00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.577 ************************************ 00:07:12.577 START TEST nvmf_abort 00:07:12.577 ************************************ 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:12.577 * Looking for test storage... 
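Each time `paths/export.sh` is sourced above, it prepends the same toolchain directories again, so the exported `PATH` visibly accumulates duplicate entries. Duplicates are harmless for lookup (the first hit wins) but bloat the environment; a sketch of first-occurrence deduplication (`dedup_path` is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Hypothetical helper: drop repeated PATH entries, keeping the first
# occurrence of each directory so lookup order is unchanged.
dedup_path() {
    local out='' dir
    local IFS=':'
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;               # already present, skip
            *) out=${out:+$out:}$dir ;;  # append, adding ':' if non-empty
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/usr/bin:/usr/local/bin:/usr/bin"
```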
00:07:12.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.577 
19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.577 --rc genhtml_branch_coverage=1 00:07:12.577 --rc genhtml_function_coverage=1 00:07:12.577 --rc genhtml_legend=1 00:07:12.577 --rc geninfo_all_blocks=1 00:07:12.577 --rc 
geninfo_unexecuted_blocks=1 00:07:12.577 00:07:12.577 ' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.577 --rc genhtml_branch_coverage=1 00:07:12.577 --rc genhtml_function_coverage=1 00:07:12.577 --rc genhtml_legend=1 00:07:12.577 --rc geninfo_all_blocks=1 00:07:12.577 --rc geninfo_unexecuted_blocks=1 00:07:12.577 00:07:12.577 ' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.577 --rc genhtml_branch_coverage=1 00:07:12.577 --rc genhtml_function_coverage=1 00:07:12.577 --rc genhtml_legend=1 00:07:12.577 --rc geninfo_all_blocks=1 00:07:12.577 --rc geninfo_unexecuted_blocks=1 00:07:12.577 00:07:12.577 ' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.577 --rc genhtml_branch_coverage=1 00:07:12.577 --rc genhtml_function_coverage=1 00:07:12.577 --rc genhtml_legend=1 00:07:12.577 --rc geninfo_all_blocks=1 00:07:12.577 --rc geninfo_unexecuted_blocks=1 00:07:12.577 00:07:12.577 ' 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:12.577 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.578 19:05:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.578 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.838 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.838 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:12.838 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.838 19:05:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.738 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:14.997 19:05:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:14.997 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:14.997 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:14.997 19:05:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:14.997 Found net devices under 0000:84:00.0: cvl_0_0 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:84:00.1: cvl_0_1' 00:07:14.997 Found net devices under 0000:84:00.1: cvl_0_1 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:14.997 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:14.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:14.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:14.998 00:07:14.998 --- 10.0.0.2 ping statistics --- 00:07:14.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.998 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:14.998 00:07:14.998 --- 10.0.0.1 ping statistics --- 00:07:14.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.998 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.998 19:05:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=99919 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 99919 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 99919 ']' 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.998 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.256 [2024-12-06 19:06:00.057440] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:07:15.256 [2024-12-06 19:06:00.057518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.256 [2024-12-06 19:06:00.132566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.256 [2024-12-06 19:06:00.194774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.256 [2024-12-06 19:06:00.194853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.256 [2024-12-06 19:06:00.194867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.256 [2024-12-06 19:06:00.194878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.256 [2024-12-06 19:06:00.194892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.256 [2024-12-06 19:06:00.196349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.256 [2024-12-06 19:06:00.196413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.256 [2024-12-06 19:06:00.196416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.515 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.515 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:15.515 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.515 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.515 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 [2024-12-06 19:06:00.344576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 Malloc0 00:07:15.516 19:06:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 Delay0 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 [2024-12-06 19:06:00.416094] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.516 19:06:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:15.516 [2024-12-06 19:06:00.531563] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:18.048 Initializing NVMe Controllers 00:07:18.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:18.048 controller IO queue size 128 less than required 00:07:18.048 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:18.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:18.048 Initialization complete. Launching workers. 
00:07:18.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30241 00:07:18.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30302, failed to submit 62 00:07:18.048 success 30245, unsuccessful 57, failed 0 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.048 rmmod nvme_tcp 00:07:18.048 rmmod nvme_fabrics 00:07:18.048 rmmod nvme_keyring 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:18.048 19:06:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 99919 ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 99919 ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99919' 00:07:18.048 killing process with pid 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 99919 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.048 19:06:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.963 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:19.963 00:07:19.963 real 0m7.542s 00:07:19.963 user 0m10.869s 00:07:19.963 sys 0m2.511s 00:07:19.963 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.963 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:19.963 ************************************ 00:07:19.963 END TEST nvmf_abort 00:07:19.963 ************************************ 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.223 ************************************ 00:07:20.223 START TEST nvmf_ns_hotplug_stress 00:07:20.223 ************************************ 00:07:20.223 19:06:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:20.223 * Looking for test storage... 00:07:20.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.223 
19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.223 19:06:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.223 --rc genhtml_branch_coverage=1 00:07:20.223 --rc genhtml_function_coverage=1 00:07:20.223 --rc genhtml_legend=1 00:07:20.223 --rc geninfo_all_blocks=1 00:07:20.223 --rc geninfo_unexecuted_blocks=1 00:07:20.223 00:07:20.223 ' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.223 --rc genhtml_branch_coverage=1 00:07:20.223 --rc genhtml_function_coverage=1 00:07:20.223 --rc genhtml_legend=1 00:07:20.223 --rc geninfo_all_blocks=1 00:07:20.223 --rc geninfo_unexecuted_blocks=1 00:07:20.223 00:07:20.223 ' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.223 --rc genhtml_branch_coverage=1 00:07:20.223 --rc genhtml_function_coverage=1 00:07:20.223 --rc genhtml_legend=1 00:07:20.223 --rc geninfo_all_blocks=1 00:07:20.223 --rc geninfo_unexecuted_blocks=1 00:07:20.223 00:07:20.223 ' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.223 --rc genhtml_branch_coverage=1 00:07:20.223 --rc genhtml_function_coverage=1 00:07:20.223 --rc genhtml_legend=1 00:07:20.223 --rc geninfo_all_blocks=1 00:07:20.223 --rc geninfo_unexecuted_blocks=1 00:07:20.223 
00:07:20.223 ' 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.223 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.224 19:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:22.758 19:06:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.758 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:22.759 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:22.759 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:22.759 19:06:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:22.759 Found net devices under 0000:84:00.0: cvl_0_0 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:22.759 19:06:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:22.759 Found net devices under 0000:84:00.1: cvl_0_1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.759 19:06:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:22.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:07:22.759 00:07:22.759 --- 10.0.0.2 ping statistics --- 00:07:22.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.759 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:22.759 00:07:22.759 --- 10.0.0.1 ping statistics --- 00:07:22.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.759 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=102551 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 102551 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 102551 ']' 00:07:22.759 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.760 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.760 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.760 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.760 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:22.760 [2024-12-06 19:06:07.587394] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:07:22.760 [2024-12-06 19:06:07.587489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.760 [2024-12-06 19:06:07.661364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.760 [2024-12-06 19:06:07.718389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.760 [2024-12-06 19:06:07.718453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.760 [2024-12-06 19:06:07.718477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.760 [2024-12-06 19:06:07.718488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.760 [2024-12-06 19:06:07.718497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.760 [2024-12-06 19:06:07.720114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.760 [2024-12-06 19:06:07.720176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.760 [2024-12-06 19:06:07.720179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:23.017 19:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:23.275 [2024-12-06 19:06:08.132088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.275 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:23.533 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.790 [2024-12-06 19:06:08.714894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.790 19:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.047 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:24.305 Malloc0 00:07:24.305 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:24.563 Delay0 00:07:24.563 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.821 19:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:25.078 NULL1 00:07:25.335 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:25.593 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=103203 00:07:25.593 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:25.593 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.593 19:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:26.525 Read completed with error (sct=0, sc=11) 00:07:26.525 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.040 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:27.040 19:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:27.298 true 00:07:27.298 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:27.298 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.887 19:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.207 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:28.207 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:28.495 true 00:07:28.495 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:28.495 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.762 19:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.106 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:29.106 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:29.424 true 00:07:29.424 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:29.424 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.744 
19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.047 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:30.047 19:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:30.344 true 00:07:30.344 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:30.344 19:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.014 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.309 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:31.309 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:31.594 true 00:07:31.594 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:31.594 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.853 19:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.111 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:32.111 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:32.370 true 00:07:32.628 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:32.629 19:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.197 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.456 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:33.456 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:33.714 true 00:07:33.715 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:33.715 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.973 19:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.231 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:34.231 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:34.490 true 00:07:34.490 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:34.490 19:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.425 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.683 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:35.683 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:35.941 true 00:07:35.941 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
103203 00:07:35.941 19:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.198 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.456 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:36.456 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:36.714 true 00:07:36.714 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:36.714 19:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.645 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.901 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:37.901 19:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:38.159 true 00:07:38.159 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:38.159 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.416 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.673 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:38.673 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:38.931 true 00:07:38.931 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:38.931 19:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.864 19:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.132 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:40.132 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:40.391 true 00:07:40.391 19:06:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:40.391 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.648 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.905 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:40.905 19:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:41.162 true 00:07:41.162 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:41.163 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.420 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.678 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:41.678 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:41.936 true 00:07:41.936 19:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:41.936 19:06:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.311 19:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.311 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:43.311 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:43.570 true 00:07:43.570 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:43.570 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.830 19:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.096 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:44.096 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:44.354 true 00:07:44.354 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:44.354 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.612 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.872 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:44.872 19:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:45.131 true 00:07:45.131 19:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:45.131 19:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.508 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.508 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:46.508 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:46.767 true 00:07:46.767 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:46.767 19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.026 
19:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.285 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:47.285 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:47.543 true 00:07:47.543 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:47.543 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.802 19:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.061 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:48.061 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:48.319 true 00:07:48.319 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:48.319 19:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.252 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.511 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:49.511 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:49.770 true 00:07:49.770 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:49.770 19:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.028 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.344 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:50.344 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:50.601 true 00:07:50.601 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:50.601 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.858 19:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.115 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:51.115 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:51.373 true 00:07:51.632 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:51.632 19:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.566 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.824 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:52.824 19:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:53.082 true 00:07:53.082 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:53.082 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.341 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.600 
19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:53.600 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:53.859 true 00:07:53.859 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:53.859 19:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.118 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.376 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:54.376 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:54.634 true 00:07:54.634 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:54.634 19:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.012 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.012 Initializing NVMe Controllers 00:07:56.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:07:56.012 Controller IO queue size 128, less than required. 00:07:56.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.012 Controller IO queue size 128, less than required. 00:07:56.012 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:56.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:56.012 Initialization complete. Launching workers. 00:07:56.012 ======================================================== 00:07:56.012 Latency(us) 00:07:56.012 Device Information : IOPS MiB/s Average min max 00:07:56.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 727.46 0.36 78768.84 2134.69 1056321.65 00:07:56.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9567.79 4.67 13378.34 3144.67 447209.04 00:07:56.012 ======================================================== 00:07:56.012 Total : 10295.26 5.03 17998.84 2134.69 1056321.65 00:07:56.012 00:07:56.012 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:56.012 19:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:56.271 true 00:07:56.271 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 103203 00:07:56.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (103203) - No such process 00:07:56.271 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 103203 00:07:56.271 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
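The repeating pattern above (remove_ns, add_ns, `null_size=100x`, bdev_null_resize, `kill -0` liveness check) is the hot-plug stress loop from ns_hotplug_stress.sh lines 44-50. A minimal reconstruction of its control flow, with `rpc` and the perf-process check mocked so it runs stand-alone:

```shell
#!/bin/sh
# Hypothetical sketch of the hot-plug loop: while the perf process is
# alive, detach/re-attach namespace 1 and grow NULL1 by one block per
# pass. perf_alive stands in for `kill -0 "$PERF_PID"`; rpc stands in
# for scripts/rpc.py. The pass count mirrors the log, which reaches
# null_size=1028 before perf exits.
null_size=1000
passes=0
perf_alive() { [ "$passes" -lt 28 ]; }
rpc() { :; }

while perf_alive; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    passes=$((passes + 1))
done
echo "$null_size"
```

When the 30-second perf run ends, `kill -0` fails ("No such process" in the log) and the script falls through to `wait` and the parallel phase.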
target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.529 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.787 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:56.787 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:56.787 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:56.787 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.787 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:57.045 null0 00:07:57.045 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.045 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.045 19:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:57.303 null1 00:07:57.303 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.303 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.303 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null2 100 4096 00:07:57.561 null2 00:07:57.561 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.561 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.561 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:57.819 null3 00:07:57.819 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:57.819 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:57.819 19:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:58.076 null4 00:07:58.076 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.076 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.076 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:58.334 null5 00:07:58.334 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.334 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.334 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:58.592 null6 00:07:58.851 19:06:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:58.851 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:58.851 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:59.110 null7 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:59.110 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 107414 107415 107417 107419 107421 107423 107425 107427 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.111 19:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.369 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.628 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.629 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:59.888 19:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.147 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:00.406 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:00.406 19:06:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.664 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:00.922 19:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.182 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.442 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.701 19:06:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.701 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.961 19:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.221 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.221 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.479 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.738 19:06:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.738 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:02.996 19:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 
19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.254 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.513 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.772 19:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.772 19:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.360 19:06:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.360 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.361 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.361 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.361 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.619 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.878 19:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:05.137 rmmod nvme_tcp 00:08:05.137 rmmod nvme_fabrics 00:08:05.137 rmmod nvme_keyring 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 102551 ']' 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 102551 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 102551 ']' 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 102551 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102551 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102551' 00:08:05.137 killing process with pid 102551 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 102551 00:08:05.137 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 102551 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.396 19:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.958 00:08:07.958 real 0m47.389s 00:08:07.958 user 3m40.554s 00:08:07.958 sys 0m16.279s 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:07.958 ************************************ 00:08:07.958 END TEST nvmf_ns_hotplug_stress 00:08:07.958 ************************************ 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.958 ************************************ 00:08:07.958 START TEST nvmf_delete_subsystem 00:08:07.958 ************************************ 00:08:07.958 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:07.958 * Looking for test storage... 00:08:07.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.958 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.958 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.958 --rc genhtml_branch_coverage=1 00:08:07.958 --rc genhtml_function_coverage=1 00:08:07.958 --rc genhtml_legend=1 00:08:07.958 --rc geninfo_all_blocks=1 00:08:07.958 --rc geninfo_unexecuted_blocks=1 00:08:07.958 00:08:07.958 ' 00:08:07.958 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.958 --rc genhtml_branch_coverage=1 00:08:07.958 --rc genhtml_function_coverage=1 00:08:07.958 --rc genhtml_legend=1 00:08:07.958 --rc geninfo_all_blocks=1 00:08:07.958 --rc geninfo_unexecuted_blocks=1 00:08:07.958 00:08:07.959 ' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.959 --rc genhtml_branch_coverage=1 00:08:07.959 --rc genhtml_function_coverage=1 00:08:07.959 --rc genhtml_legend=1 00:08:07.959 --rc geninfo_all_blocks=1 00:08:07.959 --rc geninfo_unexecuted_blocks=1 00:08:07.959 00:08:07.959 ' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.959 --rc genhtml_branch_coverage=1 00:08:07.959 --rc genhtml_function_coverage=1 00:08:07.959 --rc genhtml_legend=1 00:08:07.959 --rc geninfo_all_blocks=1 00:08:07.959 --rc geninfo_unexecuted_blocks=1 00:08:07.959 00:08:07.959 ' 
00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.959 19:06:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.959 19:06:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.876 19:06:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:09.876 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:09.876 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:09.876 Found net devices under 0000:84:00.0: cvl_0_0 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:84:00.1: cvl_0_1' 00:08:09.876 Found net devices under 0000:84:00.1: cvl_0_1 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.876 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.877 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:10.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:08:10.138 00:08:10.138 --- 10.0.0.2 ping statistics --- 00:08:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.138 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:08:10.138 00:08:10.138 --- 10.0.0.1 ping statistics --- 00:08:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.138 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:10.138 19:06:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=110225 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 110225 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 110225 ']' 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.138 19:06:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.138 [2024-12-06 19:06:55.031609] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:08:10.138 [2024-12-06 19:06:55.031717] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:10.138 [2024-12-06 19:06:55.103052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:10.138 [2024-12-06 19:06:55.159145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:10.138 [2024-12-06 19:06:55.159210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:10.138 [2024-12-06 19:06:55.159231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:10.138 [2024-12-06 19:06:55.159243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:10.138 [2024-12-06 19:06:55.159252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:10.138 [2024-12-06 19:06:55.161045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:10.138 [2024-12-06 19:06:55.161098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 [2024-12-06 19:06:55.294115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 [2024-12-06 19:06:55.310375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 NULL1
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 Delay0
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=110359
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:10.415 19:06:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:10.415 [2024-12-06 19:06:55.395180] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
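The xtrace trail above (delete_subsystem.sh@15 through @26) boils down to a short RPC sequence that builds the target under test. A dry-run sketch: the rpc() echo wrapper and the rpc.py name are illustrative assumptions, since the test itself issues these through its rpc_cmd helper; the flags are copied verbatim from the log.

```shell
# Dry-run stand-in for SPDK's rpc_cmd helper: rpc() just echoes, so the
# sequence can be read (or executed harmlessly) outside a Jenkins workspace.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
# Wrap NULL1 in a delay bdev (latencies in microseconds, so ~1 s per I/O);
# this keeps I/O in flight long enough for nvmf_delete_subsystem to race it.
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

With the 1 s artificial latency and queue depth 128, the subsequent spdk_nvme_perf run is guaranteed to have commands outstanding when the subsystem is deleted, which is the error path this test exercises.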
00:08:12.314 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.314 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.314 19:06:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 
00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Read completed with error (sct=0, sc=8) 00:08:12.572 starting I/O failed: -6 00:08:12.572 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 
00:08:12.573 starting I/O failed: -6 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed 
with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 starting I/O failed: -6 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 [2024-12-06 19:06:57.567543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed44000c40 is same with the state(6) to be set 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 
00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Write completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.573 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed 
with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:12.574 Write completed with error (sct=0, sc=8) 
00:08:12.574 Read completed with error (sct=0, sc=8) 00:08:13.509 [2024-12-06 19:06:58.532398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cae9b0 is same with the state(6) to be set 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Write completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Write completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Write completed with error (sct=0, sc=8) 00:08:13.767 Write completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Read completed with error (sct=0, sc=8) 00:08:13.767 Write completed with error (sct=0, sc=8) 00:08:13.768 [2024-12-06 19:06:58.569751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed4400d7e0 is same with the state(6) to be set 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 
00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 [2024-12-06 19:06:58.569967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cad680 is same with the state(6) to be set 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 
Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 [2024-12-06 19:06:58.570176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cad2c0 is same with the state(6) to be set 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Write completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 Read completed with error (sct=0, sc=8) 00:08:13.768 [2024-12-06 19:06:58.570841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fed4400d020 is same with the state(6) to be set 00:08:13.768 
19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.768 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:13.768 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110359
00:08:13.768 19:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:13.768 Initializing NVMe Controllers
00:08:13.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:13.768 Controller IO queue size 128, less than required.
00:08:13.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:13.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:13.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:13.768 Initialization complete. Launching workers.
00:08:13.768 ========================================================
00:08:13.768 Latency(us)
00:08:13.768 Device Information : IOPS MiB/s Average min max
00:08:13.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.13 0.08 893839.47 582.77 1012654.95
00:08:13.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.21 0.08 914326.42 573.09 1012832.06
00:08:13.768 ========================================================
00:08:13.768 Total : 335.34 0.16 903749.58 573.09 1012832.06
00:08:13.768
00:08:13.768 [2024-12-06 19:06:58.571807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cae9b0 (9): Bad file descriptor
00:08:13.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 110359
00:08:14.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (110359) - No such process
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 110359
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 110359
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:14.027 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 110359
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:14.286 [2024-12-06 19:06:59.092256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=110767
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:14.286 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:14.286 [2024-12-06 19:06:59.157534] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
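The repeated delete_subsystem.sh@60/@57/@58 xtrace triplets that follow are the test's poll loop: check the perf pid with `kill -0` every 0.5 s, giving up after a bounded number of polls. The pattern can be sketched as a self-contained snippet (wait_for_exit and the `sleep` stand-in for spdk_nvme_perf are illustrative, not the test's own code):

```shell
# Poll a pid with `kill -0` (signal 0: existence check, nothing delivered)
# every 0.5 s, as the delete_subsystem.sh loop in the log above does.
wait_for_exit() {
    local pid=$1 max=$2 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > max )) && return 1     # still alive after max polls
        sleep 0.5
    done
    return 0                                # process has exited
}

sleep 1 &                                   # stand-in for the backgrounded perf run
wait_for_exit $! 20 && echo "perf exited"
```

Once the process is gone, `kill -0` itself fails with "No such process", which is exactly the message the log prints when the loop terminates.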
00:08:14.854 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:14.854 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:14.854 19:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:15.113 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:15.113 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:15.113 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:15.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:15.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:15.680 19:07:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:16.245 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:16.245 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:16.245 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:16.809 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:16.809 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:16.809 19:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:17.375 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:17.375 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:17.375 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:17.375 Initializing NVMe Controllers
00:08:17.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:17.375 Controller IO queue size 128, less than required.
00:08:17.375 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:17.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:17.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:17.375 Initialization complete. Launching workers.
00:08:17.375 ========================================================
00:08:17.375 Latency(us)
00:08:17.375 Device Information : IOPS MiB/s Average min max
00:08:17.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003687.91 1000189.83 1042626.02
00:08:17.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004655.02 1000230.06 1012880.04
00:08:17.375 ========================================================
00:08:17.375 Total : 256.00 0.12 1004171.46 1000189.83 1042626.02
00:08:17.375
00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 110767
00:08:17.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (110767) - No such process
00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 110767
00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap -
SIGINT SIGTERM EXIT 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.633 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.633 rmmod nvme_tcp 00:08:17.633 rmmod nvme_fabrics 00:08:17.633 rmmod nvme_keyring 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 110225 ']' 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 110225 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 110225 ']' 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 110225 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:17.634 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.634 19:07:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110225 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110225' 00:08:17.894 killing process with pid 110225 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 110225 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 110225 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.894 19:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.436 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.436 00:08:20.436 real 0m12.471s 00:08:20.436 user 0m27.958s 00:08:20.436 sys 0m3.039s 00:08:20.436 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.436 19:07:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.436 ************************************ 00:08:20.436 END TEST nvmf_delete_subsystem 00:08:20.436 ************************************ 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.436 ************************************ 00:08:20.436 START TEST nvmf_host_management 00:08:20.436 ************************************ 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:20.436 * Looking for test storage... 
00:08:20.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:20.436 19:07:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.436 19:07:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.436 --rc genhtml_branch_coverage=1 00:08:20.436 --rc genhtml_function_coverage=1 00:08:20.436 --rc genhtml_legend=1 00:08:20.436 --rc geninfo_all_blocks=1 00:08:20.436 --rc geninfo_unexecuted_blocks=1 00:08:20.436 00:08:20.436 ' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.436 --rc genhtml_branch_coverage=1 00:08:20.436 --rc genhtml_function_coverage=1 00:08:20.436 --rc genhtml_legend=1 00:08:20.436 --rc geninfo_all_blocks=1 00:08:20.436 --rc geninfo_unexecuted_blocks=1 00:08:20.436 00:08:20.436 ' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.436 --rc genhtml_branch_coverage=1 00:08:20.436 --rc genhtml_function_coverage=1 00:08:20.436 --rc genhtml_legend=1 00:08:20.436 --rc geninfo_all_blocks=1 00:08:20.436 --rc geninfo_unexecuted_blocks=1 00:08:20.436 00:08:20.436 ' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.436 --rc genhtml_branch_coverage=1 00:08:20.436 --rc genhtml_function_coverage=1 00:08:20.436 --rc genhtml_legend=1 00:08:20.436 --rc geninfo_all_blocks=1 00:08:20.436 --rc geninfo_unexecuted_blocks=1 00:08:20.436 00:08:20.436 ' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
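The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh@333–@368) splits dotted version strings on `.-:` and compares them field by field, treating missing fields as zero. An illustrative reimplementation of that comparison, not the SPDK helper itself:

```shell
# Returns 0 (true) when $1 is strictly less than $2 as a dotted version.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}
```

With this, `version_lt 1.15 2` succeeds and `version_lt 2 1.15` fails, matching the lcov-version gate the test script is evaluating.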
00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.436 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.437 19:07:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.979 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.979 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:22.979 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:22.979 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:22.980 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.980 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:22.980 Found net devices under 0000:84:00.0: cvl_0_0 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:22.980 Found net devices under 0000:84:00.1: cvl_0_1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.980 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:22.980 00:08:22.980 --- 10.0.0.2 ping statistics --- 00:08:22.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.980 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:22.980 00:08:22.980 --- 10.0.0.1 ping statistics --- 00:08:22.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.980 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
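The nvmf_tcp_init sequence traced above puts the two sides of the link in separate network stacks on one host: the target NIC moves into a dedicated namespace and gets 10.0.0.2, the initiator NIC stays in the default namespace with 10.0.0.1, the firewall opens TCP port 4420 (the NVMe-oF well-known port), and a ping in each direction confirms the path. A dry-run sketch of the same commands — interface names and addresses are taken from this log, while `run`/`DRY_RUN` are our additions so the script can be inspected without root (the log's actual iptables rule also carries an `SPDK_NVMF` comment, elided here):

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as traced above. With DRY_RUN=echo the
# commands are only printed; clearing DRY_RUN (and running as root with the
# real cvl_0_* interfaces present) would apply them.
DRY_RUN=echo
run() { $DRY_RUN "$@"; }

NS=cvl_0_0_ns_spdk   # target namespace name from the log
TGT_IF=cvl_0_0       # interface handed to the target namespace
INI_IF=cvl_0_1       # interface left in the default namespace

setup_netns() {
    run ip -4 addr flush "$TGT_IF"
    run ip -4 addr flush "$INI_IF"
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INI_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    run ip link set "$INI_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                        # initiator -> target
    run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
}
setup_netns
```

Keeping the target in a namespace is what later lets `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk`) wrap every target-side command, including the nvmf_tgt launch itself.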
00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=113183 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 113183 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 113183 ']' 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.980 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.980 [2024-12-06 19:07:07.621387] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:08:22.980 [2024-12-06 19:07:07.621482] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.980 [2024-12-06 19:07:07.692395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.980 [2024-12-06 19:07:07.748387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.981 [2024-12-06 19:07:07.748449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.981 [2024-12-06 19:07:07.748476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.981 [2024-12-06 19:07:07.748487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.981 [2024-12-06 19:07:07.748496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
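nvmfappstart above launches nvmf_tgt inside the target namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E`) and then blocks in `waitforlisten` until pid 113183 is up and serving RPCs on /var/tmp/spdk.sock. A simplified sketch of that wait — the real helper in autotest_common.sh also probes the socket with an actual RPC call, which is elided here, and the retry count and interval below are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: poll until the process is alive and its
# RPC unix socket exists. The real helper additionally issues an RPC over the
# socket; that step (and its exact retry policy) is elided in this sketch.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died (or never started)
        [ -S "$rpc_addr" ] && return 0           # socket is up: listener ready
        sleep 0.1
    done
    return 1   # timed out waiting for the socket
}
```

This is why the trace prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before any subsystem configuration happens.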
00:08:22.981 [2024-12-06 19:07:07.750245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.981 [2024-12-06 19:07:07.750307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.981 [2024-12-06 19:07:07.750380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.981 [2024-12-06 19:07:07.750377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 [2024-12-06 19:07:07.891372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:22.981 19:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 Malloc0 00:08:22.981 [2024-12-06 19:07:07.966458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.981 19:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=113308 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 113308 /var/tmp/bdevperf.sock 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 113308 ']' 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:22.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.981 { 00:08:22.981 "params": { 00:08:22.981 "name": "Nvme$subsystem", 00:08:22.981 "trtype": "$TEST_TRANSPORT", 00:08:22.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.981 "adrfam": "ipv4", 00:08:22.981 "trsvcid": "$NVMF_PORT", 00:08:22.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.981 "hdgst": ${hdgst:-false}, 
00:08:22.981 "ddgst": ${ddgst:-false} 00:08:22.981 }, 00:08:22.981 "method": "bdev_nvme_attach_controller" 00:08:22.981 } 00:08:22.981 EOF 00:08:22.981 )") 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:22.981 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.981 "params": { 00:08:22.981 "name": "Nvme0", 00:08:22.981 "trtype": "tcp", 00:08:22.981 "traddr": "10.0.0.2", 00:08:22.981 "adrfam": "ipv4", 00:08:22.981 "trsvcid": "4420", 00:08:22.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:22.981 "hdgst": false, 00:08:22.981 "ddgst": false 00:08:22.981 }, 00:08:22.981 "method": "bdev_nvme_attach_controller" 00:08:22.981 }' 00:08:23.241 [2024-12-06 19:07:08.051092] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:08:23.241 [2024-12-06 19:07:08.051164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113308 ] 00:08:23.241 [2024-12-06 19:07:08.120844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.241 [2024-12-06 19:07:08.180328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.499 Running I/O for 10 seconds... 
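The `gen_nvmf_target_json` trace above expands one `bdev_nvme_attach_controller` fragment per subsystem id and hands the merged JSON to bdevperf on a process-substitution descriptor (`--json /dev/fd/63`). A standalone sketch of the generator for the single-subsystem case used here — the real helper pipes the result through `jq` to validate it, which this sketch skips, and the literals below are the values this log resolved (`$TEST_TRANSPORT`=tcp, `$NVMF_FIRST_TARGET_IP`=10.0.0.2, `$NVMF_PORT`=4420):

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json as traced above: one attach-controller
# fragment per subsystem id, joined with commas. The real helper validates
# the output with jq; with several ids the fragments would need an enclosing
# structure, so this sketch targets the single-subsystem case from the log.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
gen_nvmf_target_json 0
```

Feeding the config over `/dev/fd` means bdevperf attaches to the target's 10.0.0.2:4420 listener at startup without a config file ever touching disk.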
00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:23.758 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=555 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 555 -ge 100 ']' 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.020 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.020 [2024-12-06 19:07:08.929679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 
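The polling traced above is host_management.sh's `waitforio`: query bdevperf's per-bdev statistics over its RPC socket, extract `num_read_ops` with jq, and declare the target healthy once at least 100 reads have completed — in this log the first poll saw 67 ops and the second saw 555, so the loop broke on its second iteration. A sketch of the loop with the RPC call stubbed out (`read_io_count` here just returns the 555 seen in the log; the real helper runs `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop traced above: up to 10 polls, 0.25 s apart,
# succeeding once the bdev has served at least 100 reads. read_io_count is a
# stub standing in for the bdev_get_iostat RPC + jq extraction.
read_io_count() { echo 555; }   # stub: the log's second poll returned 555

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(read_io_count)
        if [ "$count" -ge 100 ]; then
            ret=0    # enough I/O observed; the target is serving requests
            break
        fi
        sleep 0.25   # matches the '# sleep 0.25' between polls in the log
    done
    return $ret
}
waitforio
```

Only after this gate passes does the test proceed to the disruptive step below: removing and re-adding the host on the subsystem while I/O is in flight, which is what triggers the ABORTED - SQ DELETION completions in the trace.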
19:07:08.929865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.020 [2024-12-06 19:07:08.929901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.929987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930023] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.930125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28c70 is same with the state(6) to be set 00:08:24.021 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.021 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:24.021 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.021 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.021 [2024-12-06 19:07:08.939216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:08:24.021 [2024-12-06 19:07:08.939255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.021 [2024-12-06 19:07:08.939304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.021 [2024-12-06 19:07:08.939333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:24.021 [2024-12-06 19:07:08.939361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d0c60 is same with the state(6) to be set 00:08:24.021 [2024-12-06 19:07:08.939743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.939972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [2024-12-06 19:07:08.939987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.021 [2024-12-06 19:07:08.940002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.021 [... repeated NOTICE pairs elided (2024-12-06 19:07:08.940023 through 19:07:08.941717): nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE sqid:1 cid:8 through cid:62 nsid:1, lba:82944 through lba:89856 in len:128 steps, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] [2024-12-06 19:07:08.941742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.022 [2024-12-06
19:07:08.941757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:24.022 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.022 19:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:24.023 [2024-12-06 19:07:08.942949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:24.023 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:24.023 00:08:24.023 Latency(us) 00:08:24.023 [2024-12-06T18:07:09.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.023 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:24.023 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:24.023 Verification LBA range: start 0x0 length 0x400 00:08:24.023 Nvme0n1 : 0.41 1544.32 96.52 154.43 0.00 36632.57 2961.26 34952.53 00:08:24.023 [2024-12-06T18:07:09.072Z] =================================================================================================================== 00:08:24.023 [2024-12-06T18:07:09.072Z] Total : 1544.32 96.52 154.43 0.00 36632.57 2961.26 34952.53 00:08:24.023 [2024-12-06 19:07:08.945704] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.023 [2024-12-06 19:07:08.945746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d0c60 (9): Bad file descriptor 00:08:24.023 [2024-12-06 19:07:08.992034] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
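The Latency(us) table above reports 1544.32 IOPS and 96.52 MiB/s for the failed Nvme0n1 job; the MiB/s column is simply IOPS times the 65536-byte I/O size set by `-o 65536` on the bdevperf command line. A minimal Python check of that conversion, using the values copied from the table above:

```python
# Verify bdevperf's IOPS -> MiB/s conversion for the failed Nvme0n1 job.
# Values are copied from the Latency(us) table in the log above.
IO_SIZE_BYTES = 65536          # -o 65536 on the bdevperf command line
iops = 1544.32                 # IOPS column for Nvme0n1
mib_per_s = iops * IO_SIZE_BYTES / (1024 * 1024)
print(round(mib_per_s, 2))     # matches the 96.52 MiB/s column
```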
00:08:24.956 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 113308 00:08:24.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (113308) - No such process 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.957 { 00:08:24.957 "params": { 00:08:24.957 "name": "Nvme$subsystem", 00:08:24.957 "trtype": "$TEST_TRANSPORT", 00:08:24.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.957 "adrfam": "ipv4", 00:08:24.957 "trsvcid": "$NVMF_PORT", 00:08:24.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.957 "hdgst": ${hdgst:-false}, 00:08:24.957 "ddgst": ${ddgst:-false} 00:08:24.957 }, 00:08:24.957 "method": "bdev_nvme_attach_controller" 00:08:24.957 } 00:08:24.957 EOF 00:08:24.957 )") 00:08:24.957 19:07:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:24.957 19:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.957 "params": { 00:08:24.957 "name": "Nvme0", 00:08:24.957 "trtype": "tcp", 00:08:24.957 "traddr": "10.0.0.2", 00:08:24.957 "adrfam": "ipv4", 00:08:24.957 "trsvcid": "4420", 00:08:24.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:24.957 "hdgst": false, 00:08:24.957 "ddgst": false 00:08:24.957 }, 00:08:24.957 "method": "bdev_nvme_attach_controller" 00:08:24.957 }' 00:08:24.957 [2024-12-06 19:07:09.992560] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:08:24.957 [2024-12-06 19:07:09.992631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113580 ] 00:08:25.224 [2024-12-06 19:07:10.065784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.224 [2024-12-06 19:07:10.126319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.483 Running I/O for 1 seconds... 
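The gen_nvmf_target_json trace above expands a heredoc template (Nvme$subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, ...) into the JSON that bdevperf reads via `--json /dev/fd/62`, as shown in the `printf '%s\n'` output. A hypothetical Python sketch of the same substitution for subsystem 0; the field values mirror the expanded output printed above, not an authoritative reimplementation of the shell helper:

```python
import json

# Rebuild the per-subsystem config that gen_nvmf_target_json emits in the
# trace above, with the template variables filled in for subsystem 0.
params = {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": False,   # ${hdgst:-false} default
    "ddgst": False,   # ${ddgst:-false} default
}
config = {"params": params, "method": "bdev_nvme_attach_controller"}
print(json.dumps(config, indent=1))
```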
00:08:26.679 1564.00 IOPS, 97.75 MiB/s 00:08:26.679 Latency(us) 00:08:26.679 [2024-12-06T18:07:11.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.679 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.679 Verification LBA range: start 0x0 length 0x400 00:08:26.679 Nvme0n1 : 1.04 1604.59 100.29 0.00 0.00 39253.86 7330.32 34564.17 00:08:26.679 [2024-12-06T18:07:11.728Z] =================================================================================================================== 00:08:26.679 [2024-12-06T18:07:11.728Z] Total : 1604.59 100.29 0.00 0.00 39253.86 7330.32 34564.17 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:26.679 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.679 19:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.679 rmmod nvme_tcp 00:08:26.938 rmmod nvme_fabrics 00:08:26.938 rmmod nvme_keyring 00:08:26.938 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 113183 ']' 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 113183 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 113183 ']' 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 113183 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113183 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113183' 00:08:26.939 killing process with pid 113183 00:08:26.939 19:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 113183 00:08:26.939 19:07:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 113183 00:08:27.200 [2024-12-06 19:07:12.030496] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.201 19:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:29.114 00:08:29.114 real 0m9.079s 00:08:29.114 user 0m20.451s 
00:08:29.114 sys 0m2.908s 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.114 ************************************ 00:08:29.114 END TEST nvmf_host_management 00:08:29.114 ************************************ 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.114 19:07:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.114 ************************************ 00:08:29.114 START TEST nvmf_lvol 00:08:29.114 ************************************ 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:29.374 * Looking for test storage... 
00:08:29.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.374 19:07:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.374 --rc genhtml_branch_coverage=1 00:08:29.374 --rc genhtml_function_coverage=1 00:08:29.374 --rc genhtml_legend=1 00:08:29.374 --rc geninfo_all_blocks=1 00:08:29.374 --rc geninfo_unexecuted_blocks=1 
00:08:29.374 00:08:29.374 ' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.374 --rc genhtml_branch_coverage=1 00:08:29.374 --rc genhtml_function_coverage=1 00:08:29.374 --rc genhtml_legend=1 00:08:29.374 --rc geninfo_all_blocks=1 00:08:29.374 --rc geninfo_unexecuted_blocks=1 00:08:29.374 00:08:29.374 ' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.374 --rc genhtml_branch_coverage=1 00:08:29.374 --rc genhtml_function_coverage=1 00:08:29.374 --rc genhtml_legend=1 00:08:29.374 --rc geninfo_all_blocks=1 00:08:29.374 --rc geninfo_unexecuted_blocks=1 00:08:29.374 00:08:29.374 ' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.374 --rc genhtml_branch_coverage=1 00:08:29.374 --rc genhtml_function_coverage=1 00:08:29.374 --rc genhtml_legend=1 00:08:29.374 --rc geninfo_all_blocks=1 00:08:29.374 --rc geninfo_unexecuted_blocks=1 00:08:29.374 00:08:29.374 ' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.374 19:07:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.374 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.375 19:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:31.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:31.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.912 
19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:31.912 Found net devices under 0000:84:00.0: cvl_0_0 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.912 19:07:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.912 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:31.913 Found net devices under 0000:84:00.1: cvl_0_1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:08:31.913 00:08:31.913 --- 10.0.0.2 ping statistics --- 00:08:31.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.913 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:08:31.913 00:08:31.913 --- 10.0.0.1 ping statistics --- 00:08:31.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.913 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=115734 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 115734 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 115734 ']' 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 [2024-12-06 19:07:16.691615] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:08:31.913 [2024-12-06 19:07:16.691692] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.913 [2024-12-06 19:07:16.762573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.913 [2024-12-06 19:07:16.815889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.913 [2024-12-06 19:07:16.815950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.913 [2024-12-06 19:07:16.815978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.913 [2024-12-06 19:07:16.815989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.913 [2024-12-06 19:07:16.815998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:31.913 [2024-12-06 19:07:16.817526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.913 [2024-12-06 19:07:16.817610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.913 [2024-12-06 19:07:16.817605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.913 19:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.172 [2024-12-06 19:07:17.195778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.172 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.740 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.740 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.998 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:32.998 19:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:33.265 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.524 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=31f45106-0ecc-4d1b-9fb9-2dde1d134b43 00:08:33.524 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31f45106-0ecc-4d1b-9fb9-2dde1d134b43 lvol 20 00:08:33.783 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=21d93d27-cb71-4213-a739-5cb8d212eb32 00:08:33.783 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.041 19:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 21d93d27-cb71-4213-a739-5cb8d212eb32 00:08:34.299 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.558 [2024-12-06 19:07:19.415157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.558 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.817 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=116123 00:08:34.817 19:07:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:34.817 19:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:35.755 19:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 21d93d27-cb71-4213-a739-5cb8d212eb32 MY_SNAPSHOT 00:08:36.013 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d1c6248b-6406-466d-b5af-9505ef34ddb7 00:08:36.013 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 21d93d27-cb71-4213-a739-5cb8d212eb32 30 00:08:36.582 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d1c6248b-6406-466d-b5af-9505ef34ddb7 MY_CLONE 00:08:36.842 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=14ab7953-eceb-486f-ba45-727953cfb8a4 00:08:36.842 19:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 14ab7953-eceb-486f-ba45-727953cfb8a4 00:08:37.410 19:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 116123 00:08:45.525 Initializing NVMe Controllers 00:08:45.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.525 Controller IO queue size 128, less than required. 00:08:45.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:45.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:45.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:45.525 Initialization complete. Launching workers. 00:08:45.525 ======================================================== 00:08:45.525 Latency(us) 00:08:45.525 Device Information : IOPS MiB/s Average min max 00:08:45.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10449.50 40.82 12255.96 542.24 100143.01 00:08:45.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10344.90 40.41 12375.38 2027.86 75541.73 00:08:45.525 ======================================================== 00:08:45.525 Total : 20794.40 81.23 12315.37 542.24 100143.01 00:08:45.525 00:08:45.525 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.525 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21d93d27-cb71-4213-a739-5cb8d212eb32 00:08:45.784 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31f45106-0ecc-4d1b-9fb9-2dde1d134b43 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.043 rmmod nvme_tcp 00:08:46.043 rmmod nvme_fabrics 00:08:46.043 rmmod nvme_keyring 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 115734 ']' 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 115734 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 115734 ']' 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 115734 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115734 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115734' 00:08:46.043 killing process with pid 115734 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 115734 00:08:46.043 19:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 115734 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.302 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.303 19:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.851 00:08:48.851 real 0m19.134s 00:08:48.851 user 1m5.223s 00:08:48.851 sys 0m5.735s 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.851 ************************************ 00:08:48.851 END TEST nvmf_lvol 00:08:48.851 
************************************ 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.851 ************************************ 00:08:48.851 START TEST nvmf_lvs_grow 00:08:48.851 ************************************ 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.851 * Looking for test storage... 00:08:48.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.851 --rc genhtml_branch_coverage=1 00:08:48.851 --rc genhtml_function_coverage=1 00:08:48.851 --rc genhtml_legend=1 00:08:48.851 --rc geninfo_all_blocks=1 00:08:48.851 --rc geninfo_unexecuted_blocks=1 00:08:48.851 00:08:48.851 ' 
00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.851 --rc genhtml_branch_coverage=1 00:08:48.851 --rc genhtml_function_coverage=1 00:08:48.851 --rc genhtml_legend=1 00:08:48.851 --rc geninfo_all_blocks=1 00:08:48.851 --rc geninfo_unexecuted_blocks=1 00:08:48.851 00:08:48.851 ' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.851 --rc genhtml_branch_coverage=1 00:08:48.851 --rc genhtml_function_coverage=1 00:08:48.851 --rc genhtml_legend=1 00:08:48.851 --rc geninfo_all_blocks=1 00:08:48.851 --rc geninfo_unexecuted_blocks=1 00:08:48.851 00:08:48.851 ' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.851 --rc genhtml_branch_coverage=1 00:08:48.851 --rc genhtml_function_coverage=1 00:08:48.851 --rc genhtml_legend=1 00:08:48.851 --rc geninfo_all_blocks=1 00:08:48.851 --rc geninfo_unexecuted_blocks=1 00:08:48.851 00:08:48.851 ' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.851 19:07:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.851 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.852 
19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.852 19:07:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.852 
19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.852 19:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:50.779 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:50.779 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.779 
19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:50.779 Found net devices under 0000:84:00.0: cvl_0_0 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:50.779 Found net devices under 0000:84:00.1: cvl_0_1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.779 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.779 19:07:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:51.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:51.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms
00:08:51.039
00:08:51.039 --- 10.0.0.2 ping statistics ---
00:08:51.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:51.039 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:51.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:51.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms
00:08:51.039
00:08:51.039 --- 10.0.0.1 ping statistics ---
00:08:51.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:51.039 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- #
nvmfappstart -m 0x1 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=119519 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 119519 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 119519 ']' 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.039 19:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.039 [2024-12-06 19:07:35.912235] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:08:51.039 [2024-12-06 19:07:35.912315] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.039 [2024-12-06 19:07:35.983914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.039 [2024-12-06 19:07:36.036597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.039 [2024-12-06 19:07:36.036660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.039 [2024-12-06 19:07:36.036687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.039 [2024-12-06 19:07:36.036697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.039 [2024-12-06 19:07:36.036707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
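The startup just logged (launch `nvmf_tgt` inside the namespace, then `waitforlisten` until the process answers on `/var/tmp/spdk.sock`) follows a simple poll-until-socket-exists pattern. A minimal sketch of that pattern, with a hypothetical socket path and a background stand-in for the target process (this is not SPDK's actual `waitforlisten` code):

```shell
# Poll for a UNIX-domain socket the way waitforlisten does above.
# The socket path and the background "server" are stand-ins (assumptions).
sock=$(mktemp -u /tmp/demo_spdk.XXXXXX).sock
( sleep 0.2; : > "$sock" ) &    # stand-in for nvmf_tgt creating its RPC socket

result=timeout
for _ in $(seq 1 50); do        # bounded retry budget, like waitforlisten
    if [ -e "$sock" ]; then result=listening; break; fi
    sleep 0.1
done
echo "$result"
rm -f "$sock"
```

The real helper additionally issues an RPC over the socket to confirm the target is serving requests, not merely that the file exists.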
00:08:51.039 [2024-12-06 19:07:36.037397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.298 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.557 [2024-12-06 19:07:36.423676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.557 ************************************ 00:08:51.557 START TEST lvs_grow_clean 00:08:51.557 ************************************ 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.557 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.815 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.816 19:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:52.074 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:08:52.074 19:07:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:08:52.074 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:52.333 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:52.333 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:52.333 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c lvol 150 00:08:52.592 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=294183fd-23d1-40a3-9c5d-ce5cd0545f40 00:08:52.592 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.592 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.850 [2024-12-06 19:07:37.835155] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.850 [2024-12-06 19:07:37.835264] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.850 true 00:08:52.851 19:07:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.851 19:07:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:08:53.109 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:53.109 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.368 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 294183fd-23d1-40a3-9c5d-ce5cd0545f40 00:08:53.625 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.884 [2024-12-06 19:07:38.894329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.884 19:07:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.142 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=119907 00:08:54.142 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.142 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # 
trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.142 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 119907 /var/tmp/bdevperf.sock 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 119907 ']' 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.143 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:54.400 [2024-12-06 19:07:39.221347] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
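In the bdevperf tables that follow, the MiB/s column is derived from IOPS at the 4096-byte IO size requested with `-o 4096`; for example, the run's summary 17063.38 IOPS corresponds to 66.65 MiB/s. A quick conversion check (the numbers are taken from this log's summary row):

```shell
# Convert bdevperf IOPS to MiB/s for the 4096-byte IO size used in this run.
iops=17063.38
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / 1048576 }')
echo "$mibps"   # 66.65, matching the summary row
```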
00:08:54.400 [2024-12-06 19:07:39.221437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119907 ] 00:08:54.400 [2024-12-06 19:07:39.289698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.400 [2024-12-06 19:07:39.348510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.657 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.657 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:54.657 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.917 Nvme0n1 00:08:54.917 19:07:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.176 [ 00:08:55.176 { 00:08:55.176 "name": "Nvme0n1", 00:08:55.176 "aliases": [ 00:08:55.176 "294183fd-23d1-40a3-9c5d-ce5cd0545f40" 00:08:55.176 ], 00:08:55.176 "product_name": "NVMe disk", 00:08:55.176 "block_size": 4096, 00:08:55.176 "num_blocks": 38912, 00:08:55.176 "uuid": "294183fd-23d1-40a3-9c5d-ce5cd0545f40", 00:08:55.176 "numa_id": 1, 00:08:55.176 "assigned_rate_limits": { 00:08:55.176 "rw_ios_per_sec": 0, 00:08:55.176 "rw_mbytes_per_sec": 0, 00:08:55.176 "r_mbytes_per_sec": 0, 00:08:55.176 "w_mbytes_per_sec": 0 00:08:55.176 }, 00:08:55.176 "claimed": false, 00:08:55.176 "zoned": false, 00:08:55.176 "supported_io_types": { 00:08:55.176 "read": true, 
00:08:55.176 "write": true, 00:08:55.176 "unmap": true, 00:08:55.176 "flush": true, 00:08:55.176 "reset": true, 00:08:55.176 "nvme_admin": true, 00:08:55.176 "nvme_io": true, 00:08:55.176 "nvme_io_md": false, 00:08:55.176 "write_zeroes": true, 00:08:55.176 "zcopy": false, 00:08:55.176 "get_zone_info": false, 00:08:55.176 "zone_management": false, 00:08:55.176 "zone_append": false, 00:08:55.176 "compare": true, 00:08:55.176 "compare_and_write": true, 00:08:55.176 "abort": true, 00:08:55.176 "seek_hole": false, 00:08:55.176 "seek_data": false, 00:08:55.176 "copy": true, 00:08:55.176 "nvme_iov_md": false 00:08:55.176 }, 00:08:55.176 "memory_domains": [ 00:08:55.176 { 00:08:55.176 "dma_device_id": "system", 00:08:55.177 "dma_device_type": 1 00:08:55.177 } 00:08:55.177 ], 00:08:55.177 "driver_specific": { 00:08:55.177 "nvme": [ 00:08:55.177 { 00:08:55.177 "trid": { 00:08:55.177 "trtype": "TCP", 00:08:55.177 "adrfam": "IPv4", 00:08:55.177 "traddr": "10.0.0.2", 00:08:55.177 "trsvcid": "4420", 00:08:55.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.177 }, 00:08:55.177 "ctrlr_data": { 00:08:55.177 "cntlid": 1, 00:08:55.177 "vendor_id": "0x8086", 00:08:55.177 "model_number": "SPDK bdev Controller", 00:08:55.177 "serial_number": "SPDK0", 00:08:55.177 "firmware_revision": "25.01", 00:08:55.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.177 "oacs": { 00:08:55.177 "security": 0, 00:08:55.177 "format": 0, 00:08:55.177 "firmware": 0, 00:08:55.177 "ns_manage": 0 00:08:55.177 }, 00:08:55.177 "multi_ctrlr": true, 00:08:55.177 "ana_reporting": false 00:08:55.177 }, 00:08:55.177 "vs": { 00:08:55.177 "nvme_version": "1.3" 00:08:55.177 }, 00:08:55.177 "ns_data": { 00:08:55.177 "id": 1, 00:08:55.177 "can_share": true 00:08:55.177 } 00:08:55.177 } 00:08:55.177 ], 00:08:55.177 "mp_policy": "active_passive" 00:08:55.177 } 00:08:55.177 } 00:08:55.177 ] 00:08:55.177 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=119996 
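The cluster counts reported in this run are self-consistent with the 4 MiB cluster size passed to `bdev_lvol_create_lvstore` (`--cluster-sz 4194304`): the 200 MiB AIO file yields 49 data clusters, the 150 MiB lvol occupies 38 clusters (38912 blocks of 4 KiB, as in the `bdev_get_bdevs` output above), and after the grow to 400 MiB the store reports 99 total and 61 free clusters. The arithmetic below reproduces those numbers; treating exactly one cluster as reserved for lvstore metadata is an assumption inferred from the 49/99 values, not taken from the lvstore spec:

```shell
# Cross-check the cluster math reported by bdev_lvol_get_lvstores in this log.
# Assumption: one 4 MiB cluster is reserved for lvstore metadata, which is
# what makes 200 MiB come out as 49 data clusters rather than 50.
cluster_mb=4
aio_mb=200; grown_mb=400; lvol_mb=150

data_clusters=$(( aio_mb / cluster_mb - 1 ))          # 49, matches the log
grown_clusters=$(( grown_mb / cluster_mb - 1 ))       # 99, after truncate -s 400M
lvol_clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))         # 38
lvol_blocks=$(( lvol_clusters * cluster_mb * 1024 * 1024 / 4096 ))   # 38912
free_clusters=$(( grown_clusters - lvol_clusters ))   # 61, matches free_clusters

echo "$data_clusters $grown_clusters $lvol_clusters $lvol_blocks $free_clusters"
```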
00:08:55.177 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:55.177 19:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.177 Running I/O for 10 seconds... 00:08:56.554 Latency(us) 00:08:56.554 [2024-12-06T18:07:41.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.554 Nvme0n1 : 1.00 16130.00 63.01 0.00 0.00 0.00 0.00 0.00 00:08:56.554 [2024-12-06T18:07:41.603Z] =================================================================================================================== 00:08:56.554 [2024-12-06T18:07:41.603Z] Total : 16130.00 63.01 0.00 0.00 0.00 0.00 0.00 00:08:56.554 00:08:57.125 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:08:57.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.383 Nvme0n1 : 2.00 16465.50 64.32 0.00 0.00 0.00 0.00 0.00 00:08:57.383 [2024-12-06T18:07:42.432Z] =================================================================================================================== 00:08:57.383 [2024-12-06T18:07:42.432Z] Total : 16465.50 64.32 0.00 0.00 0.00 0.00 0.00 00:08:57.383 00:08:57.383 true 00:08:57.383 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:08:57.384 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:57.949 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.949 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.949 19:07:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 119996 00:08:58.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.206 Nvme0n1 : 3.00 16630.00 64.96 0.00 0.00 0.00 0.00 0.00 00:08:58.206 [2024-12-06T18:07:43.255Z] =================================================================================================================== 00:08:58.206 [2024-12-06T18:07:43.255Z] Total : 16630.00 64.96 0.00 0.00 0.00 0.00 0.00 00:08:58.206 00:08:59.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.581 Nvme0n1 : 4.00 16744.00 65.41 0.00 0.00 0.00 0.00 0.00 00:08:59.581 [2024-12-06T18:07:44.630Z] =================================================================================================================== 00:08:59.581 [2024-12-06T18:07:44.630Z] Total : 16744.00 65.41 0.00 0.00 0.00 0.00 0.00 00:08:59.581 00:09:00.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.525 Nvme0n1 : 5.00 16839.20 65.78 0.00 0.00 0.00 0.00 0.00 00:09:00.525 [2024-12-06T18:07:45.574Z] =================================================================================================================== 00:09:00.525 [2024-12-06T18:07:45.574Z] Total : 16839.20 65.78 0.00 0.00 0.00 0.00 0.00 00:09:00.525 00:09:01.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.460 Nvme0n1 : 6.00 16912.83 66.07 0.00 0.00 0.00 0.00 0.00 00:09:01.460 [2024-12-06T18:07:46.509Z] =================================================================================================================== 00:09:01.460 
[2024-12-06T18:07:46.509Z] Total : 16912.83 66.07 0.00 0.00 0.00 0.00 0.00 00:09:01.460 00:09:02.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.396 Nvme0n1 : 7.00 16966.00 66.27 0.00 0.00 0.00 0.00 0.00 00:09:02.396 [2024-12-06T18:07:47.445Z] =================================================================================================================== 00:09:02.396 [2024-12-06T18:07:47.446Z] Total : 16966.00 66.27 0.00 0.00 0.00 0.00 0.00 00:09:02.397 00:09:03.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.350 Nvme0n1 : 8.00 17005.75 66.43 0.00 0.00 0.00 0.00 0.00 00:09:03.350 [2024-12-06T18:07:48.399Z] =================================================================================================================== 00:09:03.350 [2024-12-06T18:07:48.399Z] Total : 17005.75 66.43 0.00 0.00 0.00 0.00 0.00 00:09:03.350 00:09:04.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.286 Nvme0n1 : 9.00 17050.44 66.60 0.00 0.00 0.00 0.00 0.00 00:09:04.286 [2024-12-06T18:07:49.335Z] =================================================================================================================== 00:09:04.286 [2024-12-06T18:07:49.335Z] Total : 17050.44 66.60 0.00 0.00 0.00 0.00 0.00 00:09:04.286 00:09:05.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.221 Nvme0n1 : 10.00 17059.90 66.64 0.00 0.00 0.00 0.00 0.00 00:09:05.221 [2024-12-06T18:07:50.270Z] =================================================================================================================== 00:09:05.221 [2024-12-06T18:07:50.270Z] Total : 17059.90 66.64 0.00 0.00 0.00 0.00 0.00 00:09:05.221 00:09:05.221 00:09:05.221 Latency(us) 00:09:05.221 [2024-12-06T18:07:50.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:05.221 Nvme0n1 : 10.01 17063.38 66.65 0.00 0.00 7497.25 3932.16 15437.37 00:09:05.221 [2024-12-06T18:07:50.270Z] =================================================================================================================== 00:09:05.221 [2024-12-06T18:07:50.270Z] Total : 17063.38 66.65 0.00 0.00 7497.25 3932.16 15437.37 00:09:05.221 { 00:09:05.221 "results": [ 00:09:05.221 { 00:09:05.221 "job": "Nvme0n1", 00:09:05.221 "core_mask": "0x2", 00:09:05.221 "workload": "randwrite", 00:09:05.221 "status": "finished", 00:09:05.221 "queue_depth": 128, 00:09:05.221 "io_size": 4096, 00:09:05.221 "runtime": 10.005462, 00:09:05.221 "iops": 17063.379981853912, 00:09:05.221 "mibps": 66.65382805411684, 00:09:05.221 "io_failed": 0, 00:09:05.221 "io_timeout": 0, 00:09:05.221 "avg_latency_us": 7497.249338547636, 00:09:05.221 "min_latency_us": 3932.16, 00:09:05.221 "max_latency_us": 15437.368888888888 00:09:05.221 } 00:09:05.221 ], 00:09:05.221 "core_count": 1 00:09:05.221 } 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 119907 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 119907 ']' 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 119907 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.221 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119907 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119907' 00:09:05.479 killing process with pid 119907 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 119907 00:09:05.479 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.479 00:09:05.479 Latency(us) 00:09:05.479 [2024-12-06T18:07:50.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.479 [2024-12-06T18:07:50.528Z] =================================================================================================================== 00:09:05.479 [2024-12-06T18:07:50.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 119907 00:09:05.479 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.045 19:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.045 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:06.045 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.304 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.304 19:07:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:06.304 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.562 [2024-12-06 19:07:51.579571] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.821 19:07:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:06.821 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:07.080 request: 00:09:07.080 { 00:09:07.080 "uuid": "9fee4b9a-1816-4b6c-941d-6ad743e7052c", 00:09:07.080 "method": "bdev_lvol_get_lvstores", 00:09:07.080 "req_id": 1 00:09:07.080 } 00:09:07.080 Got JSON-RPC error response 00:09:07.080 response: 00:09:07.080 { 00:09:07.080 "code": -19, 00:09:07.080 "message": "No such device" 00:09:07.080 } 00:09:07.080 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:07.080 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.080 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.080 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.080 19:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.338 aio_bdev 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 294183fd-23d1-40a3-9c5d-ce5cd0545f40 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=294183fd-23d1-40a3-9c5d-ce5cd0545f40 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.338 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.596 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 294183fd-23d1-40a3-9c5d-ce5cd0545f40 -t 2000 00:09:07.855 [ 00:09:07.855 { 00:09:07.855 "name": "294183fd-23d1-40a3-9c5d-ce5cd0545f40", 00:09:07.855 "aliases": [ 00:09:07.855 "lvs/lvol" 00:09:07.855 ], 00:09:07.855 "product_name": "Logical Volume", 00:09:07.855 "block_size": 4096, 00:09:07.855 "num_blocks": 38912, 00:09:07.855 "uuid": "294183fd-23d1-40a3-9c5d-ce5cd0545f40", 00:09:07.855 "assigned_rate_limits": { 00:09:07.855 "rw_ios_per_sec": 0, 00:09:07.855 "rw_mbytes_per_sec": 0, 00:09:07.855 "r_mbytes_per_sec": 0, 00:09:07.855 "w_mbytes_per_sec": 0 00:09:07.855 }, 00:09:07.855 "claimed": false, 00:09:07.855 "zoned": false, 00:09:07.855 "supported_io_types": { 00:09:07.855 "read": true, 00:09:07.855 "write": true, 00:09:07.855 "unmap": true, 00:09:07.855 "flush": false, 00:09:07.855 "reset": true, 00:09:07.855 
"nvme_admin": false, 00:09:07.855 "nvme_io": false, 00:09:07.855 "nvme_io_md": false, 00:09:07.855 "write_zeroes": true, 00:09:07.855 "zcopy": false, 00:09:07.855 "get_zone_info": false, 00:09:07.855 "zone_management": false, 00:09:07.855 "zone_append": false, 00:09:07.855 "compare": false, 00:09:07.855 "compare_and_write": false, 00:09:07.855 "abort": false, 00:09:07.855 "seek_hole": true, 00:09:07.855 "seek_data": true, 00:09:07.855 "copy": false, 00:09:07.855 "nvme_iov_md": false 00:09:07.855 }, 00:09:07.855 "driver_specific": { 00:09:07.855 "lvol": { 00:09:07.855 "lvol_store_uuid": "9fee4b9a-1816-4b6c-941d-6ad743e7052c", 00:09:07.855 "base_bdev": "aio_bdev", 00:09:07.855 "thin_provision": false, 00:09:07.855 "num_allocated_clusters": 38, 00:09:07.855 "snapshot": false, 00:09:07.855 "clone": false, 00:09:07.855 "esnap_clone": false 00:09:07.855 } 00:09:07.855 } 00:09:07.855 } 00:09:07.855 ] 00:09:07.855 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:07.855 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:07.855 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:08.113 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:08.113 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:08.113 19:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:08.371 19:07:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.371 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 294183fd-23d1-40a3-9c5d-ce5cd0545f40 00:09:08.628 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fee4b9a-1816-4b6c-941d-6ad743e7052c 00:09:08.886 19:07:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.143 00:09:09.143 real 0m17.623s 00:09:09.143 user 0m17.241s 00:09:09.143 sys 0m1.831s 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 ************************************ 00:09:09.143 END TEST lvs_grow_clean 00:09:09.143 ************************************ 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 ************************************ 
00:09:09.143 START TEST lvs_grow_dirty 00:09:09.143 ************************************ 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.143 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.400 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.400 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:09.658 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d735494c-1667-4e85-bbee-98932de06c01 00:09:09.658 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:09.658 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.224 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.224 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.224 19:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d735494c-1667-4e85-bbee-98932de06c01 lvol 150 00:09:10.224 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cad8880-e83a-418d-8a81-8479d1826b46 00:09:10.224 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.224 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.481 [2024-12-06 19:07:55.500193] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:10.481 [2024-12-06 19:07:55.500305] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.481 true 00:09:10.481 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:10.481 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.739 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.739 19:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.303 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cad8880-e83a-418d-8a81-8479d1826b46 00:09:11.303 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.560 [2024-12-06 19:07:56.579481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.560 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=122045 00:09:11.816 19:07:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 122045 /var/tmp/bdevperf.sock 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 122045 ']' 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.816 19:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.074 [2024-12-06 19:07:56.909217] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:09:12.074 [2024-12-06 19:07:56.909287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122045 ] 00:09:12.074 [2024-12-06 19:07:56.977241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.074 [2024-12-06 19:07:57.038230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.331 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.331 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:12.331 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.588 Nvme0n1 00:09:12.846 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.846 [ 00:09:12.846 { 00:09:12.846 "name": "Nvme0n1", 00:09:12.846 "aliases": [ 00:09:12.846 "1cad8880-e83a-418d-8a81-8479d1826b46" 00:09:12.846 ], 00:09:12.846 "product_name": "NVMe disk", 00:09:12.846 "block_size": 4096, 00:09:12.846 "num_blocks": 38912, 00:09:12.846 "uuid": "1cad8880-e83a-418d-8a81-8479d1826b46", 00:09:12.846 "numa_id": 1, 00:09:12.846 "assigned_rate_limits": { 00:09:12.846 "rw_ios_per_sec": 0, 00:09:12.846 "rw_mbytes_per_sec": 0, 00:09:12.846 "r_mbytes_per_sec": 0, 00:09:12.846 "w_mbytes_per_sec": 0 00:09:12.846 }, 00:09:12.846 "claimed": false, 00:09:12.846 "zoned": false, 00:09:12.846 "supported_io_types": { 00:09:12.846 "read": true, 
00:09:12.846 "write": true, 00:09:12.846 "unmap": true, 00:09:12.846 "flush": true, 00:09:12.846 "reset": true, 00:09:12.846 "nvme_admin": true, 00:09:12.846 "nvme_io": true, 00:09:12.846 "nvme_io_md": false, 00:09:12.846 "write_zeroes": true, 00:09:12.846 "zcopy": false, 00:09:12.846 "get_zone_info": false, 00:09:12.846 "zone_management": false, 00:09:12.846 "zone_append": false, 00:09:12.846 "compare": true, 00:09:12.846 "compare_and_write": true, 00:09:12.846 "abort": true, 00:09:12.846 "seek_hole": false, 00:09:12.846 "seek_data": false, 00:09:12.846 "copy": true, 00:09:12.846 "nvme_iov_md": false 00:09:12.846 }, 00:09:12.846 "memory_domains": [ 00:09:12.846 { 00:09:12.846 "dma_device_id": "system", 00:09:12.846 "dma_device_type": 1 00:09:12.846 } 00:09:12.846 ], 00:09:12.846 "driver_specific": { 00:09:12.846 "nvme": [ 00:09:12.846 { 00:09:12.846 "trid": { 00:09:12.846 "trtype": "TCP", 00:09:12.846 "adrfam": "IPv4", 00:09:12.846 "traddr": "10.0.0.2", 00:09:12.846 "trsvcid": "4420", 00:09:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.846 }, 00:09:12.846 "ctrlr_data": { 00:09:12.846 "cntlid": 1, 00:09:12.846 "vendor_id": "0x8086", 00:09:12.846 "model_number": "SPDK bdev Controller", 00:09:12.846 "serial_number": "SPDK0", 00:09:12.846 "firmware_revision": "25.01", 00:09:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.846 "oacs": { 00:09:12.846 "security": 0, 00:09:12.846 "format": 0, 00:09:12.846 "firmware": 0, 00:09:12.846 "ns_manage": 0 00:09:12.846 }, 00:09:12.846 "multi_ctrlr": true, 00:09:12.846 "ana_reporting": false 00:09:12.846 }, 00:09:12.846 "vs": { 00:09:12.846 "nvme_version": "1.3" 00:09:12.846 }, 00:09:12.846 "ns_data": { 00:09:12.846 "id": 1, 00:09:12.846 "can_share": true 00:09:12.846 } 00:09:12.846 } 00:09:12.846 ], 00:09:12.846 "mp_policy": "active_passive" 00:09:12.846 } 00:09:12.846 } 00:09:12.846 ] 00:09:13.105 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=122183 
00:09:13.105 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.105 19:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.105 Running I/O for 10 seconds... 00:09:14.041 Latency(us) 00:09:14.041 [2024-12-06T18:07:59.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.041 Nvme0n1 : 1.00 16523.00 64.54 0.00 0.00 0.00 0.00 0.00 00:09:14.041 [2024-12-06T18:07:59.090Z] =================================================================================================================== 00:09:14.041 [2024-12-06T18:07:59.090Z] Total : 16523.00 64.54 0.00 0.00 0.00 0.00 0.00 00:09:14.041 00:09:14.977 19:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d735494c-1667-4e85-bbee-98932de06c01 00:09:14.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.977 Nvme0n1 : 2.00 16744.00 65.41 0.00 0.00 0.00 0.00 0.00 00:09:14.977 [2024-12-06T18:08:00.026Z] =================================================================================================================== 00:09:14.977 [2024-12-06T18:08:00.026Z] Total : 16744.00 65.41 0.00 0.00 0.00 0.00 0.00 00:09:14.977 00:09:15.236 true 00:09:15.236 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:15.236 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:15.495 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.495 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.754 19:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 122183 00:09:16.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.012 Nvme0n1 : 3.00 16612.00 64.89 0.00 0.00 0.00 0.00 0.00 00:09:16.012 [2024-12-06T18:08:01.061Z] =================================================================================================================== 00:09:16.012 [2024-12-06T18:08:01.061Z] Total : 16612.00 64.89 0.00 0.00 0.00 0.00 0.00 00:09:16.012 00:09:17.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.386 Nvme0n1 : 4.00 16766.50 65.49 0.00 0.00 0.00 0.00 0.00 00:09:17.386 [2024-12-06T18:08:02.435Z] =================================================================================================================== 00:09:17.386 [2024-12-06T18:08:02.435Z] Total : 16766.50 65.49 0.00 0.00 0.00 0.00 0.00 00:09:17.386 00:09:18.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.322 Nvme0n1 : 5.00 16881.80 65.94 0.00 0.00 0.00 0.00 0.00 00:09:18.322 [2024-12-06T18:08:03.371Z] =================================================================================================================== 00:09:18.322 [2024-12-06T18:08:03.371Z] Total : 16881.80 65.94 0.00 0.00 0.00 0.00 0.00 00:09:18.322 00:09:19.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.255 Nvme0n1 : 6.00 16972.50 66.30 0.00 0.00 0.00 0.00 0.00 00:09:19.255 [2024-12-06T18:08:04.304Z] =================================================================================================================== 00:09:19.255 
[2024-12-06T18:08:04.304Z] Total : 16972.50 66.30 0.00 0.00 0.00 0.00 0.00 00:09:19.255 00:09:20.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.194 Nvme0n1 : 7.00 17024.57 66.50 0.00 0.00 0.00 0.00 0.00 00:09:20.194 [2024-12-06T18:08:05.243Z] =================================================================================================================== 00:09:20.194 [2024-12-06T18:08:05.243Z] Total : 17024.57 66.50 0.00 0.00 0.00 0.00 0.00 00:09:20.194 00:09:21.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.132 Nvme0n1 : 8.00 17064.75 66.66 0.00 0.00 0.00 0.00 0.00 00:09:21.132 [2024-12-06T18:08:06.181Z] =================================================================================================================== 00:09:21.132 [2024-12-06T18:08:06.181Z] Total : 17064.75 66.66 0.00 0.00 0.00 0.00 0.00 00:09:21.132 00:09:22.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.067 Nvme0n1 : 9.00 17090.11 66.76 0.00 0.00 0.00 0.00 0.00 00:09:22.067 [2024-12-06T18:08:07.116Z] =================================================================================================================== 00:09:22.067 [2024-12-06T18:08:07.116Z] Total : 17090.11 66.76 0.00 0.00 0.00 0.00 0.00 00:09:22.067 00:09:23.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.001 Nvme0n1 : 10.00 17118.10 66.87 0.00 0.00 0.00 0.00 0.00 00:09:23.001 [2024-12-06T18:08:08.050Z] =================================================================================================================== 00:09:23.001 [2024-12-06T18:08:08.050Z] Total : 17118.10 66.87 0.00 0.00 0.00 0.00 0.00 00:09:23.001 00:09:23.001 00:09:23.001 Latency(us) 00:09:23.001 [2024-12-06T18:08:08.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:23.001 Nvme0n1 : 10.01 17119.07 66.87 0.00 0.00 7473.03 2026.76 14854.83 00:09:23.001 [2024-12-06T18:08:08.050Z] =================================================================================================================== 00:09:23.001 [2024-12-06T18:08:08.050Z] Total : 17119.07 66.87 0.00 0.00 7473.03 2026.76 14854.83 00:09:23.001 { 00:09:23.001 "results": [ 00:09:23.001 { 00:09:23.001 "job": "Nvme0n1", 00:09:23.001 "core_mask": "0x2", 00:09:23.001 "workload": "randwrite", 00:09:23.001 "status": "finished", 00:09:23.001 "queue_depth": 128, 00:09:23.001 "io_size": 4096, 00:09:23.001 "runtime": 10.006911, 00:09:23.001 "iops": 17119.069011406216, 00:09:23.001 "mibps": 66.87136332580553, 00:09:23.001 "io_failed": 0, 00:09:23.001 "io_timeout": 0, 00:09:23.001 "avg_latency_us": 7473.027965934634, 00:09:23.001 "min_latency_us": 2026.7614814814815, 00:09:23.001 "max_latency_us": 14854.826666666666 00:09:23.001 } 00:09:23.001 ], 00:09:23.001 "core_count": 1 00:09:23.001 } 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 122045 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 122045 ']' 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 122045 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122045 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:23.259 19:08:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122045' 00:09:23.259 killing process with pid 122045 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 122045 00:09:23.259 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.259 00:09:23.259 Latency(us) 00:09:23.259 [2024-12-06T18:08:08.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.259 [2024-12-06T18:08:08.308Z] =================================================================================================================== 00:09:23.259 [2024-12-06T18:08:08.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.259 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 122045 00:09:23.516 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.774 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.031 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:24.031 19:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 119519 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 119519 00:09:24.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 119519 Killed "${NVMF_APP[@]}" "$@" 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=123524 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 123524 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 123524 ']' 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.289 19:08:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.289 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.289 [2024-12-06 19:08:09.231151] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:24.289 [2024-12-06 19:08:09.231256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.289 [2024-12-06 19:08:09.304513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.547 [2024-12-06 19:08:09.364136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.547 [2024-12-06 19:08:09.364198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.547 [2024-12-06 19:08:09.364225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.547 [2024-12-06 19:08:09.364236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.547 [2024-12-06 19:08:09.364246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:24.547 [2024-12-06 19:08:09.364940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.547 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.806 [2024-12-06 19:08:09.769053] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.806 [2024-12-06 19:08:09.769209] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.806 [2024-12-06 19:08:09.769256] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1cad8880-e83a-418d-8a81-8479d1826b46 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cad8880-e83a-418d-8a81-8479d1826b46 
00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.806 19:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.064 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cad8880-e83a-418d-8a81-8479d1826b46 -t 2000 00:09:25.320 [ 00:09:25.321 { 00:09:25.321 "name": "1cad8880-e83a-418d-8a81-8479d1826b46", 00:09:25.321 "aliases": [ 00:09:25.321 "lvs/lvol" 00:09:25.321 ], 00:09:25.321 "product_name": "Logical Volume", 00:09:25.321 "block_size": 4096, 00:09:25.321 "num_blocks": 38912, 00:09:25.321 "uuid": "1cad8880-e83a-418d-8a81-8479d1826b46", 00:09:25.321 "assigned_rate_limits": { 00:09:25.321 "rw_ios_per_sec": 0, 00:09:25.321 "rw_mbytes_per_sec": 0, 00:09:25.321 "r_mbytes_per_sec": 0, 00:09:25.321 "w_mbytes_per_sec": 0 00:09:25.321 }, 00:09:25.321 "claimed": false, 00:09:25.321 "zoned": false, 00:09:25.321 "supported_io_types": { 00:09:25.321 "read": true, 00:09:25.321 "write": true, 00:09:25.321 "unmap": true, 00:09:25.321 "flush": false, 00:09:25.321 "reset": true, 00:09:25.321 "nvme_admin": false, 00:09:25.321 "nvme_io": false, 00:09:25.321 "nvme_io_md": false, 00:09:25.321 "write_zeroes": true, 00:09:25.321 "zcopy": false, 00:09:25.321 "get_zone_info": false, 00:09:25.321 "zone_management": false, 00:09:25.321 "zone_append": 
false, 00:09:25.321 "compare": false, 00:09:25.321 "compare_and_write": false, 00:09:25.321 "abort": false, 00:09:25.321 "seek_hole": true, 00:09:25.321 "seek_data": true, 00:09:25.321 "copy": false, 00:09:25.321 "nvme_iov_md": false 00:09:25.321 }, 00:09:25.321 "driver_specific": { 00:09:25.321 "lvol": { 00:09:25.321 "lvol_store_uuid": "d735494c-1667-4e85-bbee-98932de06c01", 00:09:25.321 "base_bdev": "aio_bdev", 00:09:25.321 "thin_provision": false, 00:09:25.321 "num_allocated_clusters": 38, 00:09:25.321 "snapshot": false, 00:09:25.321 "clone": false, 00:09:25.321 "esnap_clone": false 00:09:25.321 } 00:09:25.321 } 00:09:25.321 } 00:09:25.321 ] 00:09:25.321 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:25.321 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:25.321 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:25.578 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:25.578 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:25.578 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:26.142 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:26.142 19:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:26.142 [2024-12-06 19:08:11.142658] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.142 19:08:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.142 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:26.400 request: 00:09:26.400 { 00:09:26.400 "uuid": "d735494c-1667-4e85-bbee-98932de06c01", 00:09:26.400 "method": "bdev_lvol_get_lvstores", 00:09:26.400 "req_id": 1 00:09:26.400 } 00:09:26.400 Got JSON-RPC error response 00:09:26.400 response: 00:09:26.400 { 00:09:26.400 "code": -19, 00:09:26.400 "message": "No such device" 00:09:26.400 } 00:09:26.400 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:26.400 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.400 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.400 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.400 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.659 aio_bdev 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cad8880-e83a-418d-8a81-8479d1826b46 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cad8880-e83a-418d-8a81-8479d1826b46 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.659 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.224 19:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cad8880-e83a-418d-8a81-8479d1826b46 -t 2000 00:09:27.224 [ 00:09:27.224 { 00:09:27.224 "name": "1cad8880-e83a-418d-8a81-8479d1826b46", 00:09:27.224 "aliases": [ 00:09:27.224 "lvs/lvol" 00:09:27.224 ], 00:09:27.224 "product_name": "Logical Volume", 00:09:27.224 "block_size": 4096, 00:09:27.224 "num_blocks": 38912, 00:09:27.224 "uuid": "1cad8880-e83a-418d-8a81-8479d1826b46", 00:09:27.224 "assigned_rate_limits": { 00:09:27.224 "rw_ios_per_sec": 0, 00:09:27.224 "rw_mbytes_per_sec": 0, 00:09:27.224 "r_mbytes_per_sec": 0, 00:09:27.224 "w_mbytes_per_sec": 0 00:09:27.224 }, 00:09:27.224 "claimed": false, 00:09:27.224 "zoned": false, 00:09:27.224 "supported_io_types": { 00:09:27.224 "read": true, 00:09:27.224 "write": true, 00:09:27.224 "unmap": true, 00:09:27.224 "flush": false, 00:09:27.224 "reset": true, 00:09:27.224 "nvme_admin": false, 00:09:27.224 "nvme_io": false, 00:09:27.224 "nvme_io_md": false, 00:09:27.224 "write_zeroes": true, 00:09:27.224 "zcopy": false, 00:09:27.224 "get_zone_info": false, 00:09:27.224 "zone_management": false, 00:09:27.224 "zone_append": false, 00:09:27.224 "compare": false, 00:09:27.224 "compare_and_write": false, 
00:09:27.224 "abort": false, 00:09:27.224 "seek_hole": true, 00:09:27.224 "seek_data": true, 00:09:27.224 "copy": false, 00:09:27.224 "nvme_iov_md": false 00:09:27.224 }, 00:09:27.224 "driver_specific": { 00:09:27.224 "lvol": { 00:09:27.224 "lvol_store_uuid": "d735494c-1667-4e85-bbee-98932de06c01", 00:09:27.224 "base_bdev": "aio_bdev", 00:09:27.224 "thin_provision": false, 00:09:27.224 "num_allocated_clusters": 38, 00:09:27.224 "snapshot": false, 00:09:27.224 "clone": false, 00:09:27.224 "esnap_clone": false 00:09:27.224 } 00:09:27.224 } 00:09:27.224 } 00:09:27.224 ] 00:09:27.224 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:27.224 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:27.224 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.483 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.483 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d735494c-1667-4e85-bbee-98932de06c01 00:09:27.483 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.050 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.050 19:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1cad8880-e83a-418d-8a81-8479d1826b46 00:09:28.050 19:08:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d735494c-1667-4e85-bbee-98932de06c01 00:09:28.617 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.617 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.617 00:09:28.617 real 0m19.499s 00:09:28.617 user 0m48.968s 00:09:28.617 sys 0m4.970s 00:09:28.617 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.617 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.617 ************************************ 00:09:28.617 END TEST lvs_grow_dirty 00:09:28.617 ************************************ 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:28.876 nvmf_trace.0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.876 rmmod nvme_tcp 00:09:28.876 rmmod nvme_fabrics 00:09:28.876 rmmod nvme_keyring 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 123524 ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 123524 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 123524 ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 123524 
00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123524 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123524' 00:09:28.876 killing process with pid 123524 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 123524 00:09:28.876 19:08:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 123524 00:09:29.136 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.136 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.136 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.136 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:29.136 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.137 19:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.065 00:09:31.065 real 0m42.696s 00:09:31.065 user 1m12.285s 00:09:31.065 sys 0m8.845s 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:31.065 ************************************ 00:09:31.065 END TEST nvmf_lvs_grow 00:09:31.065 ************************************ 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.065 ************************************ 00:09:31.065 START TEST nvmf_bdev_io_wait 00:09:31.065 ************************************ 00:09:31.065 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:31.325 * Looking for test storage... 
00:09:31.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:31.325 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.325 --rc genhtml_branch_coverage=1 00:09:31.325 --rc genhtml_function_coverage=1 00:09:31.325 --rc genhtml_legend=1 00:09:31.325 --rc geninfo_all_blocks=1 00:09:31.325 --rc geninfo_unexecuted_blocks=1 00:09:31.325 00:09:31.325 ' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:31.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.325 --rc genhtml_branch_coverage=1 00:09:31.325 --rc genhtml_function_coverage=1 00:09:31.325 --rc genhtml_legend=1 00:09:31.325 --rc geninfo_all_blocks=1 00:09:31.325 --rc geninfo_unexecuted_blocks=1 00:09:31.325 00:09:31.325 ' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:31.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.325 --rc genhtml_branch_coverage=1 00:09:31.325 --rc genhtml_function_coverage=1 00:09:31.325 --rc genhtml_legend=1 00:09:31.325 --rc geninfo_all_blocks=1 00:09:31.325 --rc geninfo_unexecuted_blocks=1 00:09:31.325 00:09:31.325 ' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:31.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.325 --rc genhtml_branch_coverage=1 00:09:31.325 --rc genhtml_function_coverage=1 00:09:31.325 --rc genhtml_legend=1 00:09:31.325 --rc geninfo_all_blocks=1 00:09:31.325 --rc geninfo_unexecuted_blocks=1 00:09:31.325 00:09:31.325 ' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.325 19:08:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.325 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.326 19:08:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.867 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:33.867 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.867 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:33.867 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.868 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:33.868 Found net devices under 0000:84:00.0: cvl_0_0 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.868 
19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:33.868 Found net devices under 0000:84:00.1: cvl_0_1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.868 19:08:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:09:33.868 00:09:33.868 --- 10.0.0.2 ping statistics --- 00:09:33.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.868 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:33.868 00:09:33.868 --- 10.0.0.1 ping statistics --- 00:09:33.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.868 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=126112 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 126112 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 126112 ']' 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.868 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.868 [2024-12-06 19:08:18.743532] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:33.868 [2024-12-06 19:08:18.743603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.868 [2024-12-06 19:08:18.822712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.868 [2024-12-06 19:08:18.883624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.868 [2024-12-06 19:08:18.883688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
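`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 126112) is up and serving RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A self-contained sketch of that polling loop, with a temporary file standing in for the real UNIX socket so the sketch needs no running target:

```shell
# Poll until the target's RPC endpoint appears, giving up after max_retries
# attempts. mktemp -u yields a path that does not exist yet; a background
# job stands in for nvmf_tgt creating its listen socket.
sockpath=$(mktemp -u)
max_retries=100
( sleep 0.2; : > "$sockpath" ) &

i=0
listen_ok=0
while [ ! -e "$sockpath" ]; do
  i=$((i + 1))
  if [ "$i" -gt "$max_retries" ]; then
    echo "timed out waiting for $sockpath"
    break
  fi
  sleep 0.1
done
[ -e "$sockpath" ] && listen_ok=1
echo "listen_ok=$listen_ok"
rm -f "$sockpath"
```

The real helper additionally issues an RPC over the socket once it exists; existence-polling is the part the trace's `Waiting for process to start up and listen...` message corresponds to.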
00:09:33.868 [2024-12-06 19:08:18.883716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.868 [2024-12-06 19:08:18.883736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.868 [2024-12-06 19:08:18.883746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.868 [2024-12-06 19:08:18.885382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.868 [2024-12-06 19:08:18.885407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.868 [2024-12-06 19:08:18.885467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.868 [2024-12-06 19:08:18.885470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.127 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.127 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:34.127 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.127 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.128 19:08:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 19:08:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 [2024-12-06 19:08:19.091782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 Malloc0 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 
19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.128 [2024-12-06 19:08:19.145197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=126240 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=126242 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
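`bdev_io_wait.sh` fans out one bdevperf instance per I/O type (the `write` run above, then `read`, `flush`, and `unmap`), records each PID in `WRITE_PID`/`READ_PID`/`FLUSH_PID`/`UNMAP_PID`, and reaps them later with `wait`. The pattern, reduced to a runnable sketch with a stub function standing in for the real bdevperf invocation:

```shell
# run_workload is a stand-in for the bdevperf command line; each instance
# gets a distinct core mask, exactly as in the trace (-m 0x10/0x20/0x40/0x80).
run_workload() {
  echo "workload=$1 core_mask=$2 finished"
}

run_workload write 0x10 & WRITE_PID=$!
run_workload read  0x20 & READ_PID=$!
run_workload flush 0x40 & FLUSH_PID=$!
run_workload unmap 0x80 & UNMAP_PID=$!

# Reap all four; wait propagates a failure status if any worker failed.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
all_reaped=yes
echo "all workloads reaped"
```

Running the four workloads concurrently on disjoint core masks is what lets the single one-second window below produce four independent latency tables.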
00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=126244 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.128 { 00:09:34.128 "params": { 00:09:34.128 "name": "Nvme$subsystem", 00:09:34.128 "trtype": "$TEST_TRANSPORT", 00:09:34.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.128 "adrfam": "ipv4", 00:09:34.128 "trsvcid": "$NVMF_PORT", 00:09:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.128 "hdgst": ${hdgst:-false}, 00:09:34.128 "ddgst": ${ddgst:-false} 00:09:34.128 }, 00:09:34.128 "method": "bdev_nvme_attach_controller" 00:09:34.128 } 00:09:34.128 EOF 00:09:34.128 )") 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=126246 00:09:34.128 19:08:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.128 { 00:09:34.128 "params": { 00:09:34.128 "name": "Nvme$subsystem", 00:09:34.128 "trtype": "$TEST_TRANSPORT", 00:09:34.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.128 "adrfam": "ipv4", 00:09:34.128 "trsvcid": "$NVMF_PORT", 00:09:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.128 "hdgst": ${hdgst:-false}, 00:09:34.128 "ddgst": ${ddgst:-false} 00:09:34.128 }, 00:09:34.128 "method": "bdev_nvme_attach_controller" 00:09:34.128 } 00:09:34.128 EOF 00:09:34.128 )") 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.128 { 00:09:34.128 "params": { 00:09:34.128 "name": "Nvme$subsystem", 00:09:34.128 "trtype": "$TEST_TRANSPORT", 00:09:34.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.128 "adrfam": "ipv4", 00:09:34.128 "trsvcid": "$NVMF_PORT", 00:09:34.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.128 "hdgst": ${hdgst:-false}, 00:09:34.128 "ddgst": ${ddgst:-false} 00:09:34.128 }, 00:09:34.128 "method": "bdev_nvme_attach_controller" 00:09:34.128 } 00:09:34.128 EOF 00:09:34.128 )") 00:09:34.128 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.129 { 00:09:34.129 "params": { 00:09:34.129 "name": "Nvme$subsystem", 00:09:34.129 "trtype": "$TEST_TRANSPORT", 00:09:34.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.129 "adrfam": "ipv4", 00:09:34.129 "trsvcid": "$NVMF_PORT", 00:09:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.129 "hdgst": ${hdgst:-false}, 00:09:34.129 "ddgst": ${ddgst:-false} 00:09:34.129 }, 00:09:34.129 "method": "bdev_nvme_attach_controller" 00:09:34.129 } 00:09:34.129 EOF 00:09:34.129 )") 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 126240 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.129 "params": { 00:09:34.129 "name": "Nvme1", 00:09:34.129 "trtype": "tcp", 00:09:34.129 "traddr": "10.0.0.2", 00:09:34.129 "adrfam": "ipv4", 00:09:34.129 "trsvcid": "4420", 00:09:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.129 "hdgst": false, 00:09:34.129 "ddgst": false 00:09:34.129 }, 00:09:34.129 "method": "bdev_nvme_attach_controller" 00:09:34.129 }' 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.129 "params": { 00:09:34.129 "name": "Nvme1", 00:09:34.129 "trtype": "tcp", 00:09:34.129 "traddr": "10.0.0.2", 00:09:34.129 "adrfam": "ipv4", 00:09:34.129 "trsvcid": "4420", 00:09:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.129 "hdgst": false, 00:09:34.129 "ddgst": false 00:09:34.129 }, 00:09:34.129 "method": "bdev_nvme_attach_controller" 00:09:34.129 }' 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.129 "params": { 00:09:34.129 "name": "Nvme1", 00:09:34.129 "trtype": "tcp", 
00:09:34.129 "traddr": "10.0.0.2", 00:09:34.129 "adrfam": "ipv4", 00:09:34.129 "trsvcid": "4420", 00:09:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.129 "hdgst": false, 00:09:34.129 "ddgst": false 00:09:34.129 }, 00:09:34.129 "method": "bdev_nvme_attach_controller" 00:09:34.129 }' 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.129 19:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.129 "params": { 00:09:34.129 "name": "Nvme1", 00:09:34.129 "trtype": "tcp", 00:09:34.129 "traddr": "10.0.0.2", 00:09:34.129 "adrfam": "ipv4", 00:09:34.129 "trsvcid": "4420", 00:09:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.129 "hdgst": false, 00:09:34.129 "ddgst": false 00:09:34.129 }, 00:09:34.129 "method": "bdev_nvme_attach_controller" 00:09:34.129 }' 00:09:34.388 [2024-12-06 19:08:19.197025] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:34.388 [2024-12-06 19:08:19.197025] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:34.388 [2024-12-06 19:08:19.197030] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
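Each bdevperf instance above receives its target description on `/dev/fd/63`, built by `gen_nvmf_target_json` from a heredoc template and then normalized through `jq`. A standalone sketch of that template, with the placeholders filled in by hand to the defaults this run resolved them to (tcp / 10.0.0.2 / 4420):

```shell
# One bdev_nvme_attach_controller stanza per subsystem; in the trace the
# $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT variables expand to
# the values hard-coded here.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

This is why all four `printf '%s\n' '{ ... "method": "bdev_nvme_attach_controller" }'` records in the trace are identical: every instance attaches to the same `cnode1` listener, differing only in core mask and workload.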
00:09:34.388 [2024-12-06 19:08:19.197108] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.388 [2024-12-06 19:08:19.197108] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.388 [2024-12-06 19:08:19.197107] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.388 [2024-12-06 19:08:19.197365] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:34.388 [2024-12-06 19:08:19.197442] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.388 [2024-12-06 19:08:19.379835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.388 [2024-12-06 19:08:19.435282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:34.647 [2024-12-06 19:08:19.481418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.647 [2024-12-06 19:08:19.536367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.647 [2024-12-06 19:08:19.608861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.647 [2024-12-06 19:08:19.666248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.647 [2024-12-06 19:08:19.670585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on 
core 6 00:09:34.906 [2024-12-06 19:08:19.718001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:34.906 Running I/O for 1 seconds... 00:09:34.906 Running I/O for 1 seconds... 00:09:34.906 Running I/O for 1 seconds... 00:09:34.906 Running I/O for 1 seconds... 00:09:35.843 5779.00 IOPS, 22.57 MiB/s 00:09:35.843 Latency(us) 00:09:35.843 [2024-12-06T18:08:20.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.843 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:35.843 Nvme1n1 : 1.02 5782.51 22.59 0.00 0.00 21842.32 8301.23 34564.17 00:09:35.843 [2024-12-06T18:08:20.892Z] =================================================================================================================== 00:09:35.843 [2024-12-06T18:08:20.892Z] Total : 5782.51 22.59 0.00 0.00 21842.32 8301.23 34564.17 00:09:35.843 5670.00 IOPS, 22.15 MiB/s [2024-12-06T18:08:20.892Z] 189448.00 IOPS, 740.03 MiB/s [2024-12-06T18:08:20.892Z] 9567.00 IOPS, 37.37 MiB/s 00:09:35.843 Latency(us) 00:09:35.843 [2024-12-06T18:08:20.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.843 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:35.843 Nvme1n1 : 1.01 5769.80 22.54 0.00 0.00 22103.25 5509.88 41166.32 00:09:35.843 [2024-12-06T18:08:20.892Z] =================================================================================================================== 00:09:35.843 [2024-12-06T18:08:20.892Z] Total : 5769.80 22.54 0.00 0.00 22103.25 5509.88 41166.32 00:09:35.843 00:09:35.843 Latency(us) 00:09:35.843 [2024-12-06T18:08:20.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.843 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:35.843 Nvme1n1 : 1.00 189096.53 738.66 0.00 0.00 673.21 283.69 1844.72 00:09:35.843 [2024-12-06T18:08:20.892Z] 
=================================================================================================================== 00:09:35.843 [2024-12-06T18:08:20.892Z] Total : 189096.53 738.66 0.00 0.00 673.21 283.69 1844.72 00:09:35.843 00:09:35.843 Latency(us) 00:09:35.843 [2024-12-06T18:08:20.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.843 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:35.843 Nvme1n1 : 1.01 9632.05 37.63 0.00 0.00 13234.43 5509.88 22524.97 00:09:35.843 [2024-12-06T18:08:20.892Z] =================================================================================================================== 00:09:35.843 [2024-12-06T18:08:20.892Z] Total : 9632.05 37.63 0.00 0.00 13234.43 5509.88 22524.97 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 126242 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 126244 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 126246 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.102 19:08:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.102 rmmod nvme_tcp 00:09:36.102 rmmod nvme_fabrics 00:09:36.102 rmmod nvme_keyring 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 126112 ']' 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 126112 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 126112 ']' 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 126112 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.102 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126112 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126112' 00:09:36.362 killing process with pid 126112 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 126112 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 126112 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.362 19:08:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.919 00:09:38.919 
real 0m7.334s 00:09:38.919 user 0m15.707s 00:09:38.919 sys 0m3.699s 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.919 ************************************ 00:09:38.919 END TEST nvmf_bdev_io_wait 00:09:38.919 ************************************ 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.919 ************************************ 00:09:38.919 START TEST nvmf_queue_depth 00:09:38.919 ************************************ 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.919 * Looking for test storage... 
00:09:38.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:38.919 
19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.919 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:38.919 --rc genhtml_branch_coverage=1 00:09:38.919 --rc genhtml_function_coverage=1 00:09:38.919 --rc genhtml_legend=1 00:09:38.919 --rc geninfo_all_blocks=1 00:09:38.919 --rc geninfo_unexecuted_blocks=1 00:09:38.919 00:09:38.919 ' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.919 --rc genhtml_branch_coverage=1 00:09:38.919 --rc genhtml_function_coverage=1 00:09:38.919 --rc genhtml_legend=1 00:09:38.919 --rc geninfo_all_blocks=1 00:09:38.919 --rc geninfo_unexecuted_blocks=1 00:09:38.919 00:09:38.919 ' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.919 --rc genhtml_branch_coverage=1 00:09:38.919 --rc genhtml_function_coverage=1 00:09:38.919 --rc genhtml_legend=1 00:09:38.919 --rc geninfo_all_blocks=1 00:09:38.919 --rc geninfo_unexecuted_blocks=1 00:09:38.919 00:09:38.919 ' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.919 --rc genhtml_branch_coverage=1 00:09:38.919 --rc genhtml_function_coverage=1 00:09:38.919 --rc genhtml_legend=1 00:09:38.919 --rc geninfo_all_blocks=1 00:09:38.919 --rc geninfo_unexecuted_blocks=1 00:09:38.919 00:09:38.919 ' 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.919 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.920 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.920 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.920 19:08:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.920 19:08:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.822 19:08:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:40.822 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:40.822 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:40.822 Found net devices under 0000:84:00.0: cvl_0_0 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:40.822 Found net devices under 0000:84:00.1: cvl_0_1 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.822 
19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.822 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:41.080 00:09:41.080 --- 10.0.0.2 ping statistics --- 00:09:41.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.080 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:09:41.080 00:09:41.080 --- 10.0.0.1 ping statistics --- 00:09:41.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.080 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=128488 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 128488 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 128488 ']' 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.080 19:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.080 [2024-12-06 19:08:26.034067] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:09:41.080 [2024-12-06 19:08:26.034134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.080 [2024-12-06 19:08:26.108245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.338 [2024-12-06 19:08:26.167407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.338 [2024-12-06 19:08:26.167481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:41.338 [2024-12-06 19:08:26.167510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.338 [2024-12-06 19:08:26.167523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.338 [2024-12-06 19:08:26.167533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.338 [2024-12-06 19:08:26.168250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 [2024-12-06 19:08:26.319254] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 Malloc0 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.338 [2024-12-06 19:08:26.367075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.338 19:08:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=128515 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 128515 /var/tmp/bdevperf.sock 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 128515 ']' 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.338 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.596 [2024-12-06 19:08:26.414210] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:09:41.596 [2024-12-06 19:08:26.414288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128515 ] 00:09:41.596 [2024-12-06 19:08:26.480242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.596 [2024-12-06 19:08:26.538075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.854 NVMe0n1 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.854 19:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.113 Running I/O for 10 seconds... 
00:09:43.986 9206.00 IOPS, 35.96 MiB/s [2024-12-06T18:08:30.413Z] 9216.00 IOPS, 36.00 MiB/s [2024-12-06T18:08:30.979Z] 9392.33 IOPS, 36.69 MiB/s [2024-12-06T18:08:32.356Z] 9420.50 IOPS, 36.80 MiB/s [2024-12-06T18:08:33.292Z] 9417.40 IOPS, 36.79 MiB/s [2024-12-06T18:08:34.229Z] 9471.17 IOPS, 37.00 MiB/s [2024-12-06T18:08:35.165Z] 9491.71 IOPS, 37.08 MiB/s [2024-12-06T18:08:36.097Z] 9536.00 IOPS, 37.25 MiB/s [2024-12-06T18:08:37.030Z] 9540.56 IOPS, 37.27 MiB/s [2024-12-06T18:08:37.289Z] 9536.50 IOPS, 37.25 MiB/s 00:09:52.240 Latency(us) 00:09:52.240 [2024-12-06T18:08:37.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.240 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.240 Verification LBA range: start 0x0 length 0x4000 00:09:52.240 NVMe0n1 : 10.06 9570.17 37.38 0.00 0.00 106530.20 12524.66 66021.45 00:09:52.240 [2024-12-06T18:08:37.289Z] =================================================================================================================== 00:09:52.240 [2024-12-06T18:08:37.289Z] Total : 9570.17 37.38 0.00 0.00 106530.20 12524.66 66021.45 00:09:52.240 { 00:09:52.240 "results": [ 00:09:52.240 { 00:09:52.240 "job": "NVMe0n1", 00:09:52.240 "core_mask": "0x1", 00:09:52.240 "workload": "verify", 00:09:52.240 "status": "finished", 00:09:52.240 "verify_range": { 00:09:52.240 "start": 0, 00:09:52.240 "length": 16384 00:09:52.240 }, 00:09:52.240 "queue_depth": 1024, 00:09:52.240 "io_size": 4096, 00:09:52.240 "runtime": 10.064921, 00:09:52.240 "iops": 9570.16950257235, 00:09:52.240 "mibps": 37.38347461942324, 00:09:52.240 "io_failed": 0, 00:09:52.240 "io_timeout": 0, 00:09:52.240 "avg_latency_us": 106530.20118996232, 00:09:52.240 "min_latency_us": 12524.657777777778, 00:09:52.240 "max_latency_us": 66021.45185185185 00:09:52.240 } 00:09:52.240 ], 00:09:52.240 "core_count": 1 00:09:52.240 } 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 128515 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 128515 ']' 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 128515 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128515 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128515' 00:09:52.240 killing process with pid 128515 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 128515 00:09:52.240 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.240 00:09:52.240 Latency(us) 00:09:52.240 [2024-12-06T18:08:37.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.240 [2024-12-06T18:08:37.289Z] =================================================================================================================== 00:09:52.240 [2024-12-06T18:08:37.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.240 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 128515 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.498 rmmod nvme_tcp 00:09:52.498 rmmod nvme_fabrics 00:09:52.498 rmmod nvme_keyring 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 128488 ']' 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 128488 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 128488 ']' 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 128488 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128488 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128488' 00:09:52.498 killing process with pid 128488 00:09:52.498 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 128488 00:09:52.499 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 128488 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.757 19:08:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.298 19:08:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.298 00:09:55.298 real 0m16.247s 00:09:55.298 user 0m22.536s 00:09:55.298 sys 0m3.407s 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:55.298 ************************************ 00:09:55.298 END TEST nvmf_queue_depth 00:09:55.298 ************************************ 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.298 ************************************ 00:09:55.298 START TEST nvmf_target_multipath 00:09:55.298 ************************************ 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:55.298 * Looking for test storage... 
00:09:55.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:55.298 19:08:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.298 --rc genhtml_branch_coverage=1 00:09:55.298 --rc genhtml_function_coverage=1 00:09:55.298 --rc genhtml_legend=1 00:09:55.298 --rc geninfo_all_blocks=1 00:09:55.298 --rc geninfo_unexecuted_blocks=1 00:09:55.298 00:09:55.298 ' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.298 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.299 19:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:57.200 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:57.200 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:57.200 Found net devices under 0000:84:00.0: cvl_0_0 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.200 19:08:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:57.200 Found net devices under 0000:84:00.1: cvl_0_1 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.200 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.201 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.461 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:09:57.462 00:09:57.462 --- 10.0.0.2 ping statistics --- 00:09:57.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.462 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:09:57.462 00:09:57.462 --- 10.0.0.1 ping statistics --- 00:09:57.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.462 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.462 only one NIC for nvmf test 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:57.462 19:08:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.462 rmmod nvme_tcp 00:09:57.462 rmmod nvme_fabrics 00:09:57.462 rmmod nvme_keyring 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.462 19:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:00.008 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.009 00:10:00.009 real 0m4.692s 00:10:00.009 user 0m1.009s 00:10:00.009 sys 0m1.708s 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:00.009 ************************************ 00:10:00.009 END TEST nvmf_target_multipath 00:10:00.009 ************************************ 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.009 ************************************ 00:10:00.009 START TEST nvmf_zcopy 00:10:00.009 ************************************ 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:00.009 * Looking for test storage... 00:10:00.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.009 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.009 --rc genhtml_branch_coverage=1 00:10:00.009 --rc genhtml_function_coverage=1 00:10:00.009 --rc genhtml_legend=1 00:10:00.009 --rc geninfo_all_blocks=1 00:10:00.009 --rc geninfo_unexecuted_blocks=1 00:10:00.009 00:10:00.009 ' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.009 --rc genhtml_branch_coverage=1 00:10:00.009 --rc genhtml_function_coverage=1 00:10:00.009 --rc genhtml_legend=1 00:10:00.009 --rc geninfo_all_blocks=1 00:10:00.009 --rc geninfo_unexecuted_blocks=1 00:10:00.009 00:10:00.009 ' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.009 --rc genhtml_branch_coverage=1 00:10:00.009 --rc genhtml_function_coverage=1 00:10:00.009 --rc genhtml_legend=1 00:10:00.009 --rc geninfo_all_blocks=1 00:10:00.009 --rc geninfo_unexecuted_blocks=1 00:10:00.009 00:10:00.009 ' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.009 --rc genhtml_branch_coverage=1 00:10:00.009 --rc 
genhtml_function_coverage=1 00:10:00.009 --rc genhtml_legend=1 00:10:00.009 --rc geninfo_all_blocks=1 00:10:00.009 --rc geninfo_unexecuted_blocks=1 00:10:00.009 00:10:00.009 ' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.009 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.009 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.010 19:08:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.010 19:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.938 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:01.938 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:01.938 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:01.938 Found net devices under 0000:84:00.0: cvl_0_0 00:10:01.938 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:01.938 Found net devices under 0000:84:00.1: cvl_0_1 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.938 19:08:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.938 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.939 19:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:02.198 00:10:02.198 --- 10.0.0.2 ping statistics --- 00:10:02.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.198 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:10:02.198 00:10:02.198 --- 10.0.0.1 ping statistics --- 00:10:02.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.198 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=133767 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 133767 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # 
'[' -z 133767 ']' 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.198 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.198 [2024-12-06 19:08:47.150933] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:10:02.198 [2024-12-06 19:08:47.151043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.198 [2024-12-06 19:08:47.226266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.457 [2024-12-06 19:08:47.283702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.457 [2024-12-06 19:08:47.283800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:02.457 [2024-12-06 19:08:47.283830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.457 [2024-12-06 19:08:47.283841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.457 [2024-12-06 19:08:47.283861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.457 [2024-12-06 19:08:47.284594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 [2024-12-06 19:08:47.434680] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 [2024-12-06 19:08:47.450989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 malloc0 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.457 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.457 { 00:10:02.457 "params": { 00:10:02.457 "name": "Nvme$subsystem", 00:10:02.457 "trtype": "$TEST_TRANSPORT", 00:10:02.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.457 "adrfam": "ipv4", 00:10:02.457 "trsvcid": "$NVMF_PORT", 00:10:02.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.458 "hdgst": ${hdgst:-false}, 00:10:02.458 "ddgst": ${ddgst:-false} 00:10:02.458 }, 00:10:02.458 "method": "bdev_nvme_attach_controller" 00:10:02.458 } 00:10:02.458 EOF 00:10:02.458 )") 00:10:02.458 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:02.458 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:02.458 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:02.458 19:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.458 "params": { 00:10:02.458 "name": "Nvme1", 00:10:02.458 "trtype": "tcp", 00:10:02.458 "traddr": "10.0.0.2", 00:10:02.458 "adrfam": "ipv4", 00:10:02.458 "trsvcid": "4420", 00:10:02.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.458 "hdgst": false, 00:10:02.458 "ddgst": false 00:10:02.458 }, 00:10:02.458 "method": "bdev_nvme_attach_controller" 00:10:02.458 }' 00:10:02.716 [2024-12-06 19:08:47.539336] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:10:02.716 [2024-12-06 19:08:47.539415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133908 ] 00:10:02.716 [2024-12-06 19:08:47.613327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.716 [2024-12-06 19:08:47.671427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.975 Running I/O for 10 seconds... 
00:10:04.865 6334.00 IOPS, 49.48 MiB/s [2024-12-06T18:08:51.286Z] 6388.00 IOPS, 49.91 MiB/s [2024-12-06T18:08:52.218Z] 6333.33 IOPS, 49.48 MiB/s [2024-12-06T18:08:53.151Z] 6376.25 IOPS, 49.81 MiB/s [2024-12-06T18:08:54.083Z] 6396.40 IOPS, 49.97 MiB/s [2024-12-06T18:08:55.017Z] 6408.17 IOPS, 50.06 MiB/s [2024-12-06T18:08:55.959Z] 6430.14 IOPS, 50.24 MiB/s [2024-12-06T18:08:57.335Z] 6431.12 IOPS, 50.24 MiB/s [2024-12-06T18:08:58.270Z] 6446.22 IOPS, 50.36 MiB/s [2024-12-06T18:08:58.270Z] 6445.20 IOPS, 50.35 MiB/s 00:10:13.221 Latency(us) 00:10:13.221 [2024-12-06T18:08:58.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.221 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:13.221 Verification LBA range: start 0x0 length 0x1000 00:10:13.221 Nvme1n1 : 10.01 6449.52 50.39 0.00 0.00 19795.31 2767.08 27767.85 00:10:13.221 [2024-12-06T18:08:58.270Z] =================================================================================================================== 00:10:13.221 [2024-12-06T18:08:58.270Z] Total : 6449.52 50.39 0.00 0.00 19795.31 2767.08 27767.85 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=135113 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:13.221 19:08:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:13.221 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:13.221 {
00:10:13.221 "params": {
00:10:13.221 "name": "Nvme$subsystem",
00:10:13.221 "trtype": "$TEST_TRANSPORT",
00:10:13.221 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:13.221 "adrfam": "ipv4",
00:10:13.221 "trsvcid": "$NVMF_PORT",
00:10:13.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:13.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:13.221 "hdgst": ${hdgst:-false},
00:10:13.221 "ddgst": ${ddgst:-false}
00:10:13.221 },
00:10:13.221 "method": "bdev_nvme_attach_controller"
00:10:13.221 }
00:10:13.221 EOF
00:10:13.221 )")
00:10:13.221 [2024-12-06 19:08:58.163084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-06 19:08:58.163130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:10:13.222 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:10:13.222 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:10:13.222 19:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:13.222 "params": {
00:10:13.222 "name": "Nvme1",
00:10:13.222 "trtype": "tcp",
00:10:13.222 "traddr": "10.0.0.2",
00:10:13.222 "adrfam": "ipv4",
00:10:13.222 "trsvcid": "4420",
00:10:13.222 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:13.222 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:13.222 "hdgst": false,
00:10:13.222 "ddgst": false
00:10:13.222 },
00:10:13.222 "method": "bdev_nvme_attach_controller"
00:10:13.222 }'
00:10:13.222 [2024-12-06 19:08:58.171038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:13.222 [2024-12-06 19:08:58.171063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 [2024-12-06 19:08:58.179057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:13.222 [2024-12-06 19:08:58.179092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 [2024-12-06 19:08:58.187072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:13.222 [2024-12-06 19:08:58.187093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 [2024-12-06 19:08:58.195094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:13.222 [2024-12-06 19:08:58.195114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 [2024-12-06 19:08:58.203116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:13.222 [2024-12-06 19:08:58.203137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:13.222 [2024-12-06 19:08:58.203800] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:10:13.222 [2024-12-06 19:08:58.203875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135113 ] 00:10:13.222 [2024-12-06 19:08:58.211123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.211144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.219156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.219175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.227177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.227196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.235198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.235218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.243221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.243240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.251243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.251262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.222 [2024-12-06 19:08:58.259282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.259302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:13.222 [2024-12-06 19:08:58.267297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.222 [2024-12-06 19:08:58.267321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.480 [2024-12-06 19:08:58.275317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.480 [2024-12-06 19:08:58.275340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.480 [2024-12-06 19:08:58.276049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.481 [2024-12-06 19:08:58.283356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.283384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.291388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.291425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.299376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.299411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.307397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.307417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.315419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.315439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.323440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.323460] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.331461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.331481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.339487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.339506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.339817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.481 [2024-12-06 19:08:58.347507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.347527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.355549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.355578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.363579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.363615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.371604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.371639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.379627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.379664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.387649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:13.481 [2024-12-06 19:08:58.387687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.395667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.395728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.403691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.403750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.411685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.411729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.419755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.419790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.427783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.427824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.435802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.435843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.443794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.443815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.451801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 
19:08:58.451822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.459823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.459844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.467857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.467882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.475876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.475900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.483899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.483923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.491920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.491945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.499939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.499962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.507961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.507994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.515983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.516006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:13.481 [2024-12-06 19:08:58.524005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.481 [2024-12-06 19:08:58.524027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.532049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.532088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.540067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.540106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.548082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.548105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.556107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.556128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.564127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.564148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.572133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.572157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.740 [2024-12-06 19:08:58.580166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.740 [2024-12-06 19:08:58.580187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 
[2024-12-06 19:08:58.588192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.588215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.596210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.596231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.604234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.604254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.612258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.612278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.620281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.620301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.628305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.628326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.636329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.636350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.644349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.644369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.652372] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.652392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.660393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.660412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.668415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.668434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.676439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.676460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.684462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.684483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.728045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.728070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.732671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.732693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.740689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.740731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 Running I/O for 5 seconds... 
00:10:13.741 [2024-12-06 19:08:58.752235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.752261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.761487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.761512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.772377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.772402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.741 [2024-12-06 19:08:58.784190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.741 [2024-12-06 19:08:58.784215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.794097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.794129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.805361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.805386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.817661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.817685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.827877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.827903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.837982] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.838022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.848332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.848358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.858790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.858817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.868485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.868509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.878679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.878718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.889078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.889102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.901414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.901438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.911043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.911084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.921359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.921383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.931321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.931345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.941882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.941909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.952105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.952130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.962154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.962179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.972169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.972194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.982235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.982260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:58.992032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:58.992081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:59.002053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 
[2024-12-06 19:08:59.002092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:59.011847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:59.011873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:59.021677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:59.021701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:59.031673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.000 [2024-12-06 19:08:59.031712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.000 [2024-12-06 19:08:59.041789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.001 [2024-12-06 19:08:59.041815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.052851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.052878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.062819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.062845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.072484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.072509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.082552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.082576] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.092692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.092742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.102319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.102343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.112137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.112161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.121884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.121915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.131824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.131850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.141974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.142024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.153794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.153820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.163109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.163133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:14.260 [2024-12-06 19:08:59.173204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.173228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.183383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.183420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.193530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.193557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.203799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.203825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.214074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.214099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.224059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.224097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.233898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.233925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.243788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.243814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.254043] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.254082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.264139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.264163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.274404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.274428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.284805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.284831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.295084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.295112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.260 [2024-12-06 19:08:59.308255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.260 [2024-12-06 19:08:59.308284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.317850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.317876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.328116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.328141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.338314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.338338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.348647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.348671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.358778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.358804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.368764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.368794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.378833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.378868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.388639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.388663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.399127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.399153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.411409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.411434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.421083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 
[2024-12-06 19:08:59.421108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.431584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.431608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.443529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.443554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.453420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.453445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.464223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.464248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.474092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.474117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.484438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.484462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.494611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.494636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.505090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.505115] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.515491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.515516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.525580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.525604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.535419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.535444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.545938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.545966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.557881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.557907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.519 [2024-12-06 19:08:59.567927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.519 [2024-12-06 19:08:59.567955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.778 [2024-12-06 19:08:59.578913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.778 [2024-12-06 19:08:59.578941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.778 [2024-12-06 19:08:59.589635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.778 [2024-12-06 19:08:59.589660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:14.778 [2024-12-06 19:08:59.600157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.778 [2024-12-06 19:08:59.600182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.778 [2024-12-06 19:08:59.613697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.613747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.623844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.623870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.634245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.634270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.644562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.644587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.654954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.654981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.665318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.665361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.675646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.675671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.686138] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.686164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.696273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.696297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.706638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.706663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.717101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.717126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.727697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.727747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.737820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.737847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 12375.00 IOPS, 96.68 MiB/s [2024-12-06T18:08:59.828Z] [2024-12-06 19:08:59.747907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.747934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.758027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.758053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.768348] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.768372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.778334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.778359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.789017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.789041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.798627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.798650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.810663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.810686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.779 [2024-12-06 19:08:59.820198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.779 [2024-12-06 19:08:59.820223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.831299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.831324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.844101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.844126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.854073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.854098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.864363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.864387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.875116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.875141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.887099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.887123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.896858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.896884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.907059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.907083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.917483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.917507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.930177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.930202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.940368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 
[2024-12-06 19:08:59.940392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.950396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.950421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.961215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.961240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.973455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.973480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.983025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.983050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:08:59.993332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:08:59.993356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.005475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.005505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.014910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.014938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.026656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.026680] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.038792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.038819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.048686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.048746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.059637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.059662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.071746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.071784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.039 [2024-12-06 19:09:00.081744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.039 [2024-12-06 19:09:00.081784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.093848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.093876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.104643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.104668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.115854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.115883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:15.299 [2024-12-06 19:09:00.126439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.126464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.138479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.138504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.148747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.148773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.159857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.159883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.171101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.171132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.181414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.181448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.192253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.192279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.204674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.204714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.214206] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.214232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.225733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.225760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.236486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.236514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.247206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.247230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.259393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.259418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.269248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.269273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.279473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.279497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.289623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.289650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.299993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.300034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.310106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.310131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.320251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.320276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.330589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.330613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.299 [2024-12-06 19:09:00.341037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.299 [2024-12-06 19:09:00.341064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.352226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.352252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.362914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.362940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.376255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.376283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.386401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 
[2024-12-06 19:09:00.386434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.397099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.397124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.407565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.407590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.417974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.418016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.430949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.430976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.441170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.558 [2024-12-06 19:09:00.441196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.558 [2024-12-06 19:09:00.451437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.451462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.464158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.464183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.475837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.475864] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.484817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.484845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.496030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.496056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.507547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.507572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.517314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.517340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.528296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.528324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.540381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.540406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.549617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.549643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.560596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.560622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:15.559 [2024-12-06 19:09:00.571047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.571072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.581089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.581115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.591568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.591601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.559 [2024-12-06 19:09:00.604535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.559 [2024-12-06 19:09:00.604561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.818 [2024-12-06 19:09:00.615185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.818 [2024-12-06 19:09:00.615210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.818 [2024-12-06 19:09:00.625104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.818 [2024-12-06 19:09:00.625129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.818 [2024-12-06 19:09:00.635308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.818 [2024-12-06 19:09:00.635333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.818 [2024-12-06 19:09:00.645529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.818 [2024-12-06 19:09:00.645553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.818 [2024-12-06 19:09:00.655618] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.818 [2024-12-06 19:09:00.655643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:15.818 12274.00 IOPS, 95.89 MiB/s [2024-12-06T18:09:00.867Z]
00:10:16.859 12309.67 IOPS, 96.17 MiB/s [2024-12-06T18:09:01.908Z]
00:10:17.637 [2024-12-06 19:09:02.443386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.637 [2024-12-06 19:09:02.443411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:10:17.637 [2024-12-06 19:09:02.453693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.453743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.466360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.466385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.475603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.475627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.485938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.485964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.496387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.496412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.508428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.508453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.520038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.520064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.529055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.529095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.539645] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.539670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.549664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.549689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.559881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.559910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.569700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.569753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.637 [2024-12-06 19:09:02.579957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.637 [2024-12-06 19:09:02.579985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.590159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.590183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.600375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.600400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.610604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.610628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.620587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.620611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.630742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.630770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.640619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.640643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.650618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.650643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.660894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.660922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.671097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.671123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.638 [2024-12-06 19:09:02.681687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.638 [2024-12-06 19:09:02.681740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.692896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.692924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.704856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 
[2024-12-06 19:09:02.704884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.714743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.714771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.725734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.725762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.738428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.738453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.748490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.748515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 12332.25 IOPS, 96.35 MiB/s [2024-12-06T18:09:02.945Z]
00:10:17.896 [2024-12-06 19:09:02.759309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.759341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.772030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.772056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.782089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 [2024-12-06 19:09:02.782114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:17.896 [2024-12-06 19:09:02.792019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:17.896 
[2024-12-06 19:09:02.792045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.802505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.802530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.812676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.812716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.823602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.823628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.836058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.836100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.846299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.846324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.856630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.856655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.867205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.867230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.896 [2024-12-06 19:09:02.877551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.896 [2024-12-06 19:09:02.877576] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.888274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.888299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.900462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.900486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.910281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.910305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.920733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.920758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.931311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.931336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.897 [2024-12-06 19:09:02.944417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.897 [2024-12-06 19:09:02.944442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:02.954588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:02.954613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:02.964790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:02.964822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.154 [2024-12-06 19:09:02.975476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:02.975501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:02.987685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:02.987733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:02.997276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:02.997300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.007478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.007502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.019898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.019924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.029740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.029792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.039589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.039614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.049648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.049673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.059466] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.059491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.070023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.070049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.080285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.080310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.090606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.090630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.103129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.103153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.112904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.112930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.122758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.122785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.133488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.133512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.143633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.143657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.154303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.154 [2024-12-06 19:09:03.154328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.154 [2024-12-06 19:09:03.164662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.155 [2024-12-06 19:09:03.164694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.155 [2024-12-06 19:09:03.177334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.155 [2024-12-06 19:09:03.177359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.155 [2024-12-06 19:09:03.196936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.155 [2024-12-06 19:09:03.196963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.207868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.207895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.219362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.219386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.228229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.228254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.238800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 
[2024-12-06 19:09:03.238825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.249341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.249366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.259943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.259968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.272277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.272301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.282071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.282097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.292311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.292335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.302346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.302370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.312333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.312358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.322378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.322403] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.332912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.332939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.343690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.343737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.356503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.356529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.366694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.366741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.376943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.376969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.387437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.387463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.399788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.399815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.408460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.408484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.413 [2024-12-06 19:09:03.421123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.421147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.431162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.431186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.441578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.441602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.413 [2024-12-06 19:09:03.453831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.413 [2024-12-06 19:09:03.453857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.464337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.464363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.474277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.474301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.484663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.484688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.497485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.497510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.507373] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.507399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.517824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.517852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.528317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.528341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.540847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.540873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.550480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.550505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.560628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.560653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.570964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.570991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.581137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.581163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.591227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.591252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.601490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.601514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.614517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.614542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.626026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.626052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.634620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.634645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.646957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.646983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.658459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.658483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.667650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 [2024-12-06 19:09:03.667675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.672 [2024-12-06 19:09:03.679056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.672 
[2024-12-06 19:09:03.679095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.672 [2024-12-06 19:09:03.690552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.672 [2024-12-06 19:09:03.690576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.672 [2024-12-06 19:09:03.700093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.672 [2024-12-06 19:09:03.700118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.672 [2024-12-06 19:09:03.710694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.672 [2024-12-06 19:09:03.710744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.723737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.723765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.733648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.733673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.743854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.743880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.753849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.753875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 12326.20 IOPS, 96.30 MiB/s [2024-12-06T18:09:03.979Z]
00:10:18.930 [2024-12-06 19:09:03.763124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.763147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 
00:10:18.930                                                        Latency(us)
00:10:18.930 [2024-12-06T18:09:03.979Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min       max
00:10:18.930 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:18.930      Nvme1n1                                :       5.01  12328.54     96.32      0.00    0.00   10369.59   4781.70  17476.27
00:10:18.930 [2024-12-06T18:09:03.979Z] ===================================================================================================================
00:10:18.930 [2024-12-06T18:09:03.979Z] Total                       :             12328.54     96.32      0.00    0.00   10369.59   4781.70  17476.27
00:10:18.930 [2024-12-06 19:09:03.769419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.769442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.777437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.777460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.785457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.785478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.793542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.793598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.801562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.930 [2024-12-06 19:09:03.801612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:18.930 [2024-12-06 19:09:03.809578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.809622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.817598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.817645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.825616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.825660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.833654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.833710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.841656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.841702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.849677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.849736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.857704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.857758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.865740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.865791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.873760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 
[2024-12-06 19:09:03.873812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.881786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.881830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.889818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.889864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.897829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.897884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.905855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.905900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.913833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.913856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.921839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.921860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.929855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.929876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.937875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.937895] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.945925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.945954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.953969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.954011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.961979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.962020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.969959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.969979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.930 [2024-12-06 19:09:03.977989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.930 [2024-12-06 19:09:03.978031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-12-06 19:09:03.986026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-12-06 19:09:03.986050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (135113) - No such process 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 135113 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.188 19:09:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.188 delay0 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.188 19:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:19.188 [2024-12-06 19:09:04.153896] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:25.743 Initializing NVMe Controllers 00:10:25.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:25.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:25.743 Initialization complete. Launching workers. 
00:10:25.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1997 00:10:25.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2284, failed to submit 33 00:10:25.743 success 2154, unsuccessful 130, failed 0 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.743 rmmod nvme_tcp 00:10:25.743 rmmod nvme_fabrics 00:10:25.743 rmmod nvme_keyring 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 133767 ']' 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 133767 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 133767 ']' 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 133767 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133767 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133767' 00:10:25.743 killing process with pid 133767 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 133767 00:10:25.743 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 133767 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.003 19:09:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.544 00:10:28.544 real 0m28.519s 00:10:28.544 user 0m42.044s 00:10:28.544 sys 0m8.939s 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.544 ************************************ 00:10:28.544 END TEST nvmf_zcopy 00:10:28.544 ************************************ 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.544 ************************************ 00:10:28.544 START TEST nvmf_nmic 00:10:28.544 ************************************ 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:28.544 * Looking for test storage... 
00:10:28.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.544 19:09:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.544 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.545 --rc genhtml_branch_coverage=1 00:10:28.545 --rc genhtml_function_coverage=1 00:10:28.545 --rc genhtml_legend=1 00:10:28.545 --rc geninfo_all_blocks=1 00:10:28.545 --rc geninfo_unexecuted_blocks=1 
00:10:28.545 00:10:28.545 ' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.545 --rc genhtml_branch_coverage=1 00:10:28.545 --rc genhtml_function_coverage=1 00:10:28.545 --rc genhtml_legend=1 00:10:28.545 --rc geninfo_all_blocks=1 00:10:28.545 --rc geninfo_unexecuted_blocks=1 00:10:28.545 00:10:28.545 ' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.545 --rc genhtml_branch_coverage=1 00:10:28.545 --rc genhtml_function_coverage=1 00:10:28.545 --rc genhtml_legend=1 00:10:28.545 --rc geninfo_all_blocks=1 00:10:28.545 --rc geninfo_unexecuted_blocks=1 00:10:28.545 00:10:28.545 ' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.545 --rc genhtml_branch_coverage=1 00:10:28.545 --rc genhtml_function_coverage=1 00:10:28.545 --rc genhtml_legend=1 00:10:28.545 --rc geninfo_all_blocks=1 00:10:28.545 --rc geninfo_unexecuted_blocks=1 00:10:28.545 00:10:28.545 ' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.545 19:09:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.545 
19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.545 19:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.455 19:09:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:30.455 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:30.455 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:30.455 Found net devices under 0000:84:00.0: cvl_0_0 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:30.455 Found net devices under 0000:84:00.1: cvl_0_1 00:10:30.455 
19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.455 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.713 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:10:30.714 00:10:30.714 --- 10.0.0.2 ping statistics --- 00:10:30.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.714 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:10:30.714 00:10:30.714 --- 10.0.0.1 ping statistics --- 00:10:30.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.714 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=139272 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.714 
19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 139272 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 139272 ']' 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.714 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.714 [2024-12-06 19:09:15.692101] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:10:30.714 [2024-12-06 19:09:15.692181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.972 [2024-12-06 19:09:15.764478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.972 [2024-12-06 19:09:15.821597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.972 [2024-12-06 19:09:15.821660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.972 [2024-12-06 19:09:15.821688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.972 [2024-12-06 19:09:15.821700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:30.973 [2024-12-06 19:09:15.821710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.973 [2024-12-06 19:09:15.823378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.973 [2024-12-06 19:09:15.823488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.973 [2024-12-06 19:09:15.823604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.973 [2024-12-06 19:09:15.823612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.973 [2024-12-06 19:09:15.977757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.973 19:09:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.973 19:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 Malloc0 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 [2024-12-06 19:09:16.050174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:31.231 test case1: single bdev can't be used in multiple subsystems 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 [2024-12-06 19:09:16.073965] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:31.231 [2024-12-06 19:09:16.074019] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:31.231 [2024-12-06 19:09:16.074035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:31.231 request: 00:10:31.231 { 00:10:31.231 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.231 "namespace": { 00:10:31.231 "bdev_name": "Malloc0", 00:10:31.231 "no_auto_visible": false, 00:10:31.231 "hide_metadata": false 00:10:31.231 }, 00:10:31.231 "method": "nvmf_subsystem_add_ns", 00:10:31.231 "req_id": 1 00:10:31.231 } 00:10:31.231 Got JSON-RPC error response 00:10:31.231 response: 00:10:31.231 { 00:10:31.231 "code": -32602, 00:10:31.231 "message": "Invalid parameters" 00:10:31.231 } 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:31.231 Adding namespace failed - expected result. 
00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:31.231 test case2: host connect to nvmf target in multiple paths 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.231 [2024-12-06 19:09:16.082114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.231 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.798 19:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:32.363 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.363 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.363 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.363 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:32.363 19:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:34.889 19:09:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.889 [global] 00:10:34.889 thread=1 00:10:34.889 invalidate=1 00:10:34.889 rw=write 00:10:34.889 time_based=1 00:10:34.889 runtime=1 00:10:34.889 ioengine=libaio 00:10:34.889 direct=1 00:10:34.889 bs=4096 00:10:34.889 iodepth=1 00:10:34.889 norandommap=0 00:10:34.889 numjobs=1 00:10:34.889 00:10:34.889 verify_dump=1 00:10:34.889 verify_backlog=512 00:10:34.889 verify_state_save=0 00:10:34.889 do_verify=1 00:10:34.889 verify=crc32c-intel 00:10:34.889 [job0] 00:10:34.889 filename=/dev/nvme0n1 00:10:34.889 Could not set queue depth (nvme0n1) 00:10:34.889 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.889 fio-3.35 00:10:34.889 Starting 1 thread 00:10:36.260 00:10:36.260 job0: (groupid=0, jobs=1): err= 0: pid=139794: Fri Dec 6 19:09:21 2024 00:10:36.260 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:36.260 slat (nsec): min=6361, max=54446, avg=12437.19, stdev=5370.78 00:10:36.260 clat (usec): min=169, max=474, avg=240.82, stdev=34.74 00:10:36.260 lat (usec): min=176, max=491, avg=253.26, 
stdev=38.72 00:10:36.260 clat percentiles (usec): 00:10:36.260 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 206], 00:10:36.260 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 251], 00:10:36.260 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:10:36.260 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 441], 99.95th=[ 461], 00:10:36.260 | 99.99th=[ 474] 00:10:36.260 write: IOPS=2167, BW=8671KiB/s (8879kB/s)(8680KiB/1001msec); 0 zone resets 00:10:36.260 slat (usec): min=7, max=28233, avg=29.54, stdev=605.77 00:10:36.261 clat (usec): min=123, max=792, avg=184.16, stdev=41.63 00:10:36.261 lat (usec): min=132, max=28453, avg=213.69, stdev=608.29 00:10:36.261 clat percentiles (usec): 00:10:36.261 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:10:36.261 | 30.00th=[ 157], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 188], 00:10:36.261 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 255], 00:10:36.261 | 99.00th=[ 310], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 396], 00:10:36.261 | 99.99th=[ 791] 00:10:36.261 bw ( KiB/s): min=10280, max=10280, per=100.00%, avg=10280.00, stdev= 0.00, samples=1 00:10:36.261 iops : min= 2570, max= 2570, avg=2570.00, stdev= 0.00, samples=1 00:10:36.261 lat (usec) : 250=76.79%, 500=23.19%, 1000=0.02% 00:10:36.261 cpu : usr=4.20%, sys=8.50%, ctx=4221, majf=0, minf=1 00:10:36.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.261 issued rwts: total=2048,2170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.261 00:10:36.261 Run status group 0 (all jobs): 00:10:36.261 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:36.261 WRITE: bw=8671KiB/s 
(8879kB/s), 8671KiB/s-8671KiB/s (8879kB/s-8879kB/s), io=8680KiB (8888kB), run=1001-1001msec 00:10:36.261 00:10:36.261 Disk stats (read/write): 00:10:36.261 nvme0n1: ios=1819/2048, merge=0/0, ticks=1416/364, in_queue=1780, util=98.60% 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.261 19:09:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.261 rmmod nvme_tcp 00:10:36.261 rmmod nvme_fabrics 00:10:36.261 rmmod nvme_keyring 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 139272 ']' 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 139272 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 139272 ']' 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 139272 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.261 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139272 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139272' 00:10:36.520 killing process with pid 139272 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 139272 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 139272 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.520 19:09:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:39.066 00:10:39.066 real 0m10.506s 00:10:39.066 user 0m23.889s 00:10:39.066 sys 0m3.012s 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.066 ************************************ 00:10:39.066 END TEST nvmf_nmic 00:10:39.066 ************************************ 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.066 19:09:23 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.066 ************************************ 00:10:39.066 START TEST nvmf_fio_target 00:10:39.066 ************************************ 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.066 * Looking for test storage... 00:10:39.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.066 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.067 
19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.067 --rc genhtml_branch_coverage=1 00:10:39.067 --rc genhtml_function_coverage=1 00:10:39.067 --rc genhtml_legend=1 00:10:39.067 --rc geninfo_all_blocks=1 00:10:39.067 --rc geninfo_unexecuted_blocks=1 00:10:39.067 00:10:39.067 ' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.067 --rc genhtml_branch_coverage=1 00:10:39.067 --rc genhtml_function_coverage=1 00:10:39.067 --rc genhtml_legend=1 00:10:39.067 --rc geninfo_all_blocks=1 00:10:39.067 --rc geninfo_unexecuted_blocks=1 00:10:39.067 00:10:39.067 ' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.067 --rc genhtml_branch_coverage=1 00:10:39.067 --rc genhtml_function_coverage=1 00:10:39.067 --rc genhtml_legend=1 00:10:39.067 --rc geninfo_all_blocks=1 00:10:39.067 --rc geninfo_unexecuted_blocks=1 00:10:39.067 00:10:39.067 ' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.067 --rc genhtml_branch_coverage=1 00:10:39.067 --rc 
genhtml_function_coverage=1 00:10:39.067 --rc genhtml_legend=1 00:10:39.067 --rc geninfo_all_blocks=1 00:10:39.067 --rc geninfo_unexecuted_blocks=1 00:10:39.067 00:10:39.067 ' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.067 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:39.068 19:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.597 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.598 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:41.598 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:41.598 19:09:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:41.598 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:41.598 Found net devices under 0000:84:00.0: cvl_0_0 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:41.598 Found net devices under 0000:84:00.1: cvl_0_1 
00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:10:41.598 00:10:41.598 --- 10.0.0.2 ping statistics --- 00:10:41.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.598 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:41.598 00:10:41.598 --- 10.0.0.1 ping statistics --- 00:10:41.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.598 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:41.598 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=142019 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 142019 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 142019 ']' 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.599 [2024-12-06 19:09:26.241405] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:10:41.599 [2024-12-06 19:09:26.241501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.599 [2024-12-06 19:09:26.311377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.599 [2024-12-06 19:09:26.368405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.599 [2024-12-06 19:09:26.368447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.599 [2024-12-06 19:09:26.368475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.599 [2024-12-06 19:09:26.368485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.599 [2024-12-06 19:09:26.368494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.599 [2024-12-06 19:09:26.370091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.599 [2024-12-06 19:09:26.370147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.599 [2024-12-06 19:09:26.370216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.599 [2024-12-06 19:09:26.370219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.599 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.856 [2024-12-06 19:09:26.745843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.856 19:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.114 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:42.114 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.372 19:09:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:42.372 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.630 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:42.630 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.196 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:43.196 19:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:43.196 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.763 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:43.763 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.763 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:43.763 19:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.330 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:44.330 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:44.589 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.847 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.847 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.105 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:45.105 19:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.363 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.621 [2024-12-06 19:09:30.449684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.621 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:45.879 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:46.138 19:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:46.704 19:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:48.600 19:09:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.600 [global] 00:10:48.600 thread=1 00:10:48.600 invalidate=1 00:10:48.600 rw=write 00:10:48.600 time_based=1 00:10:48.600 runtime=1 00:10:48.600 ioengine=libaio 00:10:48.600 direct=1 00:10:48.600 bs=4096 00:10:48.600 iodepth=1 00:10:48.600 norandommap=0 00:10:48.600 numjobs=1 00:10:48.600 00:10:48.859 
verify_dump=1 00:10:48.859 verify_backlog=512 00:10:48.859 verify_state_save=0 00:10:48.859 do_verify=1 00:10:48.859 verify=crc32c-intel 00:10:48.859 [job0] 00:10:48.859 filename=/dev/nvme0n1 00:10:48.859 [job1] 00:10:48.859 filename=/dev/nvme0n2 00:10:48.859 [job2] 00:10:48.859 filename=/dev/nvme0n3 00:10:48.859 [job3] 00:10:48.859 filename=/dev/nvme0n4 00:10:48.859 Could not set queue depth (nvme0n1) 00:10:48.859 Could not set queue depth (nvme0n2) 00:10:48.859 Could not set queue depth (nvme0n3) 00:10:48.859 Could not set queue depth (nvme0n4) 00:10:48.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.859 fio-3.35 00:10:48.859 Starting 4 threads 00:10:50.231 00:10:50.231 job0: (groupid=0, jobs=1): err= 0: pid=143100: Fri Dec 6 19:09:35 2024 00:10:50.231 read: IOPS=276, BW=1105KiB/s (1132kB/s)(1112KiB/1006msec) 00:10:50.231 slat (nsec): min=7450, max=43854, avg=12309.50, stdev=7071.28 00:10:50.231 clat (usec): min=176, max=41008, avg=3166.35, stdev=10529.06 00:10:50.231 lat (usec): min=184, max=41031, avg=3178.66, stdev=10532.85 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 200], 00:10:50.231 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 229], 60.00th=[ 247], 00:10:50.231 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[41157], 00:10:50.231 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.231 | 99.99th=[41157] 00:10:50.231 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:50.231 slat (usec): min=9, max=15492, avg=41.84, 
stdev=684.18 00:10:50.231 clat (usec): min=151, max=1103, avg=191.37, stdev=63.52 00:10:50.231 lat (usec): min=161, max=15687, avg=233.21, stdev=687.26 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:10:50.231 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:10:50.231 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 239], 00:10:50.231 | 99.00th=[ 289], 99.50th=[ 783], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:50.231 | 99.99th=[ 1106] 00:10:50.231 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.231 lat (usec) : 250=85.19%, 500=11.39%, 750=0.38%, 1000=0.25% 00:10:50.231 lat (msec) : 2=0.13%, 4=0.13%, 50=2.53% 00:10:50.231 cpu : usr=0.60%, sys=0.70%, ctx=794, majf=0, minf=1 00:10:50.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 issued rwts: total=278,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.231 job1: (groupid=0, jobs=1): err= 0: pid=143101: Fri Dec 6 19:09:35 2024 00:10:50.231 read: IOPS=437, BW=1751KiB/s (1793kB/s)(1772KiB/1012msec) 00:10:50.231 slat (nsec): min=5123, max=34565, avg=10834.50, stdev=5228.27 00:10:50.231 clat (usec): min=183, max=41215, avg=2002.18, stdev=8252.95 00:10:50.231 lat (usec): min=193, max=41222, avg=2013.02, stdev=8255.18 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 217], 00:10:50.231 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 258], 00:10:50.231 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 334], 95.00th=[ 498], 00:10:50.231 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.231 | 99.99th=[41157] 00:10:50.231 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:50.231 slat (nsec): min=6105, max=49098, avg=11600.65, stdev=4583.77 00:10:50.231 clat (usec): min=132, max=484, avg=216.25, stdev=37.99 00:10:50.231 lat (usec): min=145, max=493, avg=227.85, stdev=37.25 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 180], 00:10:50.231 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 231], 00:10:50.231 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 253], 00:10:50.231 | 99.00th=[ 310], 99.50th=[ 396], 99.90th=[ 486], 99.95th=[ 486], 00:10:50.231 | 99.99th=[ 486] 00:10:50.231 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.231 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.231 lat (usec) : 250=72.04%, 500=25.65%, 750=0.31% 00:10:50.231 lat (msec) : 50=1.99% 00:10:50.231 cpu : usr=0.59%, sys=0.89%, ctx=958, majf=0, minf=2 00:10:50.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 issued rwts: total=443,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.231 job2: (groupid=0, jobs=1): err= 0: pid=143102: Fri Dec 6 19:09:35 2024 00:10:50.231 read: IOPS=1465, BW=5863KiB/s (6004kB/s)(6068KiB/1035msec) 00:10:50.231 slat (nsec): min=5709, max=53831, avg=15841.47, stdev=9175.89 00:10:50.231 clat (usec): min=187, max=41164, avg=443.86, stdev=2555.61 00:10:50.231 lat (usec): min=194, max=41180, avg=459.70, stdev=2555.78 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 
00:10:50.231 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 265], 00:10:50.231 | 70.00th=[ 277], 80.00th=[ 322], 90.00th=[ 437], 95.00th=[ 482], 00:10:50.231 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:10:50.231 | 99.99th=[41157] 00:10:50.231 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:10:50.231 slat (usec): min=7, max=978, avg=15.27, stdev=25.33 00:10:50.231 clat (usec): min=138, max=704, avg=195.58, stdev=49.39 00:10:50.231 lat (usec): min=150, max=1268, avg=210.86, stdev=55.62 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 165], 00:10:50.231 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 190], 00:10:50.231 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 241], 95.00th=[ 310], 00:10:50.231 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 701], 00:10:50.231 | 99.99th=[ 701] 00:10:50.231 bw ( KiB/s): min= 4096, max= 8192, per=51.75%, avg=6144.00, stdev=2896.31, samples=2 00:10:50.231 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:50.231 lat (usec) : 250=68.75%, 500=29.71%, 750=1.34% 00:10:50.231 lat (msec) : 50=0.20% 00:10:50.231 cpu : usr=2.42%, sys=4.64%, ctx=3055, majf=0, minf=1 00:10:50.231 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.231 issued rwts: total=1517,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.231 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.231 job3: (groupid=0, jobs=1): err= 0: pid=143103: Fri Dec 6 19:09:35 2024 00:10:50.231 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:10:50.231 slat (nsec): min=11544, max=34571, avg=24831.90, stdev=7352.48 00:10:50.231 clat (usec): min=40830, max=41328, avg=40983.16, stdev=104.40 
00:10:50.231 lat (usec): min=40865, max=41340, avg=41007.99, stdev=99.45 00:10:50.231 clat percentiles (usec): 00:10:50.231 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:50.231 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:50.231 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.231 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.231 | 99.99th=[41157] 00:10:50.232 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:50.232 slat (nsec): min=7618, max=56665, avg=12341.25, stdev=7590.33 00:10:50.232 clat (usec): min=152, max=647, avg=280.43, stdev=80.78 00:10:50.232 lat (usec): min=160, max=655, avg=292.77, stdev=80.78 00:10:50.232 clat percentiles (usec): 00:10:50.232 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 206], 20.00th=[ 221], 00:10:50.232 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 269], 00:10:50.232 | 70.00th=[ 306], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 416], 00:10:50.232 | 99.00th=[ 474], 99.50th=[ 619], 99.90th=[ 652], 99.95th=[ 652], 00:10:50.232 | 99.99th=[ 652] 00:10:50.232 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.232 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.232 lat (usec) : 250=48.97%, 500=46.53%, 750=0.56% 00:10:50.232 lat (msec) : 50=3.94% 00:10:50.232 cpu : usr=0.30%, sys=0.59%, ctx=535, majf=0, minf=1 00:10:50.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.232 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.232 00:10:50.232 Run status group 0 (all jobs): 00:10:50.232 READ: bw=8730KiB/s (8940kB/s), 
82.9KiB/s-5863KiB/s (84.9kB/s-6004kB/s), io=9036KiB (9253kB), run=1006-1035msec 00:10:50.232 WRITE: bw=11.6MiB/s (12.2MB/s), 2022KiB/s-5936KiB/s (2070kB/s-6079kB/s), io=12.0MiB (12.6MB), run=1006-1035msec 00:10:50.232 00:10:50.232 Disk stats (read/write): 00:10:50.232 nvme0n1: ios=160/512, merge=0/0, ticks=1713/96, in_queue=1809, util=97.49% 00:10:50.232 nvme0n2: ios=293/512, merge=0/0, ticks=683/109, in_queue=792, util=86.43% 00:10:50.232 nvme0n3: ios=1573/1536, merge=0/0, ticks=683/301, in_queue=984, util=97.59% 00:10:50.232 nvme0n4: ios=73/512, merge=0/0, ticks=901/141, in_queue=1042, util=97.78% 00:10:50.232 19:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:50.232 [global] 00:10:50.232 thread=1 00:10:50.232 invalidate=1 00:10:50.232 rw=randwrite 00:10:50.232 time_based=1 00:10:50.232 runtime=1 00:10:50.232 ioengine=libaio 00:10:50.232 direct=1 00:10:50.232 bs=4096 00:10:50.232 iodepth=1 00:10:50.232 norandommap=0 00:10:50.232 numjobs=1 00:10:50.232 00:10:50.232 verify_dump=1 00:10:50.232 verify_backlog=512 00:10:50.232 verify_state_save=0 00:10:50.232 do_verify=1 00:10:50.232 verify=crc32c-intel 00:10:50.232 [job0] 00:10:50.232 filename=/dev/nvme0n1 00:10:50.232 [job1] 00:10:50.232 filename=/dev/nvme0n2 00:10:50.232 [job2] 00:10:50.232 filename=/dev/nvme0n3 00:10:50.232 [job3] 00:10:50.232 filename=/dev/nvme0n4 00:10:50.232 Could not set queue depth (nvme0n1) 00:10:50.232 Could not set queue depth (nvme0n2) 00:10:50.232 Could not set queue depth (nvme0n3) 00:10:50.232 Could not set queue depth (nvme0n4) 00:10:50.489 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.489 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.489 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.489 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.489 fio-3.35 00:10:50.489 Starting 4 threads 00:10:51.863 00:10:51.863 job0: (groupid=0, jobs=1): err= 0: pid=143334: Fri Dec 6 19:09:36 2024 00:10:51.863 read: IOPS=811, BW=3245KiB/s (3323kB/s)(3248KiB/1001msec) 00:10:51.863 slat (nsec): min=6910, max=52829, avg=10393.70, stdev=4780.22 00:10:51.863 clat (usec): min=169, max=41361, avg=938.26, stdev=5036.77 00:10:51.863 lat (usec): min=176, max=41370, avg=948.65, stdev=5038.17 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 210], 00:10:51.863 | 30.00th=[ 237], 40.00th=[ 269], 50.00th=[ 318], 60.00th=[ 359], 00:10:51.863 | 70.00th=[ 367], 80.00th=[ 371], 90.00th=[ 375], 95.00th=[ 388], 00:10:51.863 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:51.863 | 99.99th=[41157] 00:10:51.863 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:51.863 slat (nsec): min=8744, max=50831, avg=13835.39, stdev=6133.21 00:10:51.863 clat (usec): min=132, max=521, avg=204.15, stdev=60.30 00:10:51.863 lat (usec): min=141, max=540, avg=217.99, stdev=63.98 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 151], 00:10:51.863 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 190], 60.00th=[ 206], 00:10:51.863 | 70.00th=[ 227], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 330], 00:10:51.863 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 523], 00:10:51.863 | 99.99th=[ 523] 00:10:51.863 bw ( KiB/s): min= 4096, max= 4096, per=23.64%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.863 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.863 lat (usec) : 250=61.38%, 500=37.85%, 750=0.05% 00:10:51.863 lat (msec) : 50=0.71% 00:10:51.863 cpu : usr=1.50%, sys=3.10%, ctx=1837, majf=0, 
minf=1 00:10:51.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 issued rwts: total=812,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.863 job1: (groupid=0, jobs=1): err= 0: pid=143335: Fri Dec 6 19:09:36 2024 00:10:51.863 read: IOPS=918, BW=3672KiB/s (3760kB/s)(3676KiB/1001msec) 00:10:51.863 slat (nsec): min=5059, max=51165, avg=11127.00, stdev=7847.02 00:10:51.863 clat (usec): min=173, max=41174, avg=817.04, stdev=4618.90 00:10:51.863 lat (usec): min=179, max=41185, avg=828.17, stdev=4619.99 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:10:51.863 | 30.00th=[ 221], 40.00th=[ 239], 50.00th=[ 277], 60.00th=[ 318], 00:10:51.863 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 375], 95.00th=[ 383], 00:10:51.863 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:51.863 | 99.99th=[41157] 00:10:51.863 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:51.863 slat (nsec): min=6625, max=42935, avg=13216.82, stdev=4417.54 00:10:51.863 clat (usec): min=138, max=478, avg=213.46, stdev=56.80 00:10:51.863 lat (usec): min=146, max=502, avg=226.68, stdev=57.49 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:51.863 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 200], 60.00th=[ 221], 00:10:51.863 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 343], 00:10:51.863 | 99.00th=[ 400], 99.50th=[ 437], 99.90th=[ 478], 99.95th=[ 478], 00:10:51.863 | 99.99th=[ 478] 00:10:51.863 bw ( KiB/s): min= 4096, max= 4096, per=23.64%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.863 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:10:51.863 lat (usec) : 250=65.11%, 500=34.12%, 750=0.05%, 1000=0.10% 00:10:51.863 lat (msec) : 50=0.62% 00:10:51.863 cpu : usr=0.90%, sys=2.80%, ctx=1943, majf=0, minf=2 00:10:51.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 issued rwts: total=919,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.863 job2: (groupid=0, jobs=1): err= 0: pid=143336: Fri Dec 6 19:09:36 2024 00:10:51.863 read: IOPS=29, BW=120KiB/s (122kB/s)(124KiB/1037msec) 00:10:51.863 slat (nsec): min=8175, max=57170, avg=23251.77, stdev=13086.77 00:10:51.863 clat (usec): min=267, max=41833, avg=28990.38, stdev=18633.88 00:10:51.863 lat (usec): min=277, max=41890, avg=29013.63, stdev=18637.05 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 269], 5.00th=[ 310], 10.00th=[ 330], 20.00th=[ 400], 00:10:51.863 | 30.00th=[34341], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:51.863 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:51.863 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:51.863 | 99.99th=[41681] 00:10:51.863 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:51.863 slat (nsec): min=8422, max=56497, avg=14814.92, stdev=7556.63 00:10:51.863 clat (usec): min=165, max=482, avg=249.63, stdev=56.97 00:10:51.863 lat (usec): min=178, max=522, avg=264.45, stdev=58.49 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 212], 00:10:51.863 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 247], 00:10:51.863 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 388], 00:10:51.863 | 99.00th=[ 469], 99.50th=[ 
474], 99.90th=[ 482], 99.95th=[ 482], 00:10:51.863 | 99.99th=[ 482] 00:10:51.863 bw ( KiB/s): min= 4096, max= 4096, per=23.64%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.863 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.863 lat (usec) : 250=60.41%, 500=35.36%, 1000=0.18% 00:10:51.863 lat (msec) : 50=4.05% 00:10:51.863 cpu : usr=0.29%, sys=0.87%, ctx=544, majf=0, minf=1 00:10:51.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.863 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.863 job3: (groupid=0, jobs=1): err= 0: pid=143337: Fri Dec 6 19:09:36 2024 00:10:51.863 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:51.863 slat (nsec): min=8024, max=51292, avg=14362.22, stdev=5928.49 00:10:51.863 clat (usec): min=199, max=40970, avg=338.94, stdev=1041.99 00:10:51.863 lat (usec): min=209, max=40979, avg=353.31, stdev=1042.15 00:10:51.863 clat percentiles (usec): 00:10:51.863 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 243], 00:10:51.863 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 281], 00:10:51.863 | 70.00th=[ 318], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 506], 00:10:51.863 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 889], 99.95th=[41157], 00:10:51.864 | 99.99th=[41157] 00:10:51.864 write: IOPS=1929, BW=7716KiB/s (7901kB/s)(7724KiB/1001msec); 0 zone resets 00:10:51.864 slat (nsec): min=8534, max=98333, avg=16847.36, stdev=7758.33 00:10:51.864 clat (usec): min=133, max=1071, avg=211.99, stdev=52.29 00:10:51.864 lat (usec): min=144, max=1081, avg=228.84, stdev=55.40 00:10:51.864 clat percentiles (usec): 00:10:51.864 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 178], 00:10:51.864 | 
30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 217], 00:10:51.864 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 285], 00:10:51.864 | 99.00th=[ 351], 99.50th=[ 396], 99.90th=[ 938], 99.95th=[ 1074], 00:10:51.864 | 99.99th=[ 1074] 00:10:51.864 bw ( KiB/s): min= 8192, max= 8192, per=47.29%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.864 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.864 lat (usec) : 250=63.17%, 500=34.09%, 750=2.62%, 1000=0.06% 00:10:51.864 lat (msec) : 2=0.03%, 50=0.03% 00:10:51.864 cpu : usr=4.30%, sys=6.50%, ctx=3468, majf=0, minf=1 00:10:51.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.864 issued rwts: total=1536,1931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.864 00:10:51.864 Run status group 0 (all jobs): 00:10:51.864 READ: bw=12.4MiB/s (13.0MB/s), 120KiB/s-6138KiB/s (122kB/s-6285kB/s), io=12.9MiB (13.5MB), run=1001-1037msec 00:10:51.864 WRITE: bw=16.9MiB/s (17.7MB/s), 1975KiB/s-7716KiB/s (2022kB/s-7901kB/s), io=17.5MiB (18.4MB), run=1001-1037msec 00:10:51.864 00:10:51.864 Disk stats (read/write): 00:10:51.864 nvme0n1: ios=564/797, merge=0/0, ticks=1653/173, in_queue=1826, util=98.50% 00:10:51.864 nvme0n2: ios=651/1024, merge=0/0, ticks=572/220, in_queue=792, util=86.79% 00:10:51.864 nvme0n3: ios=84/512, merge=0/0, ticks=889/127, in_queue=1016, util=98.12% 00:10:51.864 nvme0n4: ios=1325/1536, merge=0/0, ticks=1099/322, in_queue=1421, util=98.01% 00:10:51.864 19:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:51.864 [global] 00:10:51.864 thread=1 00:10:51.864 invalidate=1 
00:10:51.864 rw=write 00:10:51.864 time_based=1 00:10:51.864 runtime=1 00:10:51.864 ioengine=libaio 00:10:51.864 direct=1 00:10:51.864 bs=4096 00:10:51.864 iodepth=128 00:10:51.864 norandommap=0 00:10:51.864 numjobs=1 00:10:51.864 00:10:51.864 verify_dump=1 00:10:51.864 verify_backlog=512 00:10:51.864 verify_state_save=0 00:10:51.864 do_verify=1 00:10:51.864 verify=crc32c-intel 00:10:51.864 [job0] 00:10:51.864 filename=/dev/nvme0n1 00:10:51.864 [job1] 00:10:51.864 filename=/dev/nvme0n2 00:10:51.864 [job2] 00:10:51.864 filename=/dev/nvme0n3 00:10:51.864 [job3] 00:10:51.864 filename=/dev/nvme0n4 00:10:51.864 Could not set queue depth (nvme0n1) 00:10:51.864 Could not set queue depth (nvme0n2) 00:10:51.864 Could not set queue depth (nvme0n3) 00:10:51.864 Could not set queue depth (nvme0n4) 00:10:51.864 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.864 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.864 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.864 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.864 fio-3.35 00:10:51.864 Starting 4 threads 00:10:53.240 00:10:53.240 job0: (groupid=0, jobs=1): err= 0: pid=143563: Fri Dec 6 19:09:38 2024 00:10:53.240 read: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(15.2MiB/1048msec) 00:10:53.240 slat (usec): min=3, max=11323, avg=109.04, stdev=739.77 00:10:53.240 clat (usec): min=5575, max=58313, avg=15156.29, stdev=8746.87 00:10:53.240 lat (usec): min=5590, max=58327, avg=15265.33, stdev=8778.93 00:10:53.240 clat percentiles (usec): 00:10:53.240 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:10:53.241 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[13435], 00:10:53.241 | 70.00th=[14746], 80.00th=[16712], 90.00th=[20841], 95.00th=[30540], 
00:10:53.241 | 99.00th=[57410], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:10:53.241 | 99.99th=[58459] 00:10:53.241 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1048msec); 0 zone resets 00:10:53.241 slat (usec): min=4, max=10982, avg=126.62, stdev=666.44 00:10:53.241 clat (usec): min=1124, max=52174, avg=18014.06, stdev=10048.77 00:10:53.241 lat (usec): min=1149, max=52183, avg=18140.68, stdev=10123.19 00:10:53.241 clat percentiles (usec): 00:10:53.241 | 1.00th=[ 4293], 5.00th=[ 7635], 10.00th=[ 9896], 20.00th=[10683], 00:10:53.241 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13304], 60.00th=[18744], 00:10:53.241 | 70.00th=[20841], 80.00th=[24773], 90.00th=[32375], 95.00th=[40633], 00:10:53.241 | 99.00th=[47973], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:10:53.241 | 99.99th=[52167] 00:10:53.241 bw ( KiB/s): min=14672, max=18132, per=26.65%, avg=16402.00, stdev=2446.59, samples=2 00:10:53.241 iops : min= 3668, max= 4533, avg=4100.50, stdev=611.65, samples=2 00:10:53.241 lat (msec) : 2=0.18%, 4=0.18%, 10=13.02%, 20=65.51%, 50=19.80% 00:10:53.241 lat (msec) : 100=1.33% 00:10:53.241 cpu : usr=4.30%, sys=8.21%, ctx=379, majf=0, minf=1 00:10:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.241 issued rwts: total=3894,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.241 job1: (groupid=0, jobs=1): err= 0: pid=143564: Fri Dec 6 19:09:38 2024 00:10:53.241 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:53.241 slat (usec): min=2, max=19478, avg=99.91, stdev=649.51 00:10:53.241 clat (usec): min=3058, max=38391, avg=12985.74, stdev=5082.55 00:10:53.241 lat (usec): min=3694, max=38407, avg=13085.65, stdev=5120.26 00:10:53.241 clat percentiles (usec): 
00:10:53.241 | 1.00th=[ 6325], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:10:53.241 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:10:53.241 | 70.00th=[13304], 80.00th=[14877], 90.00th=[18744], 95.00th=[23200], 00:10:53.241 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[37487], 00:10:53.241 | 99.99th=[38536] 00:10:53.241 write: IOPS=5144, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec); 0 zone resets 00:10:53.241 slat (usec): min=4, max=9942, avg=79.08, stdev=404.16 00:10:53.241 clat (usec): min=246, max=34826, avg=11783.04, stdev=4128.30 00:10:53.241 lat (usec): min=398, max=34837, avg=11862.11, stdev=4164.06 00:10:53.241 clat percentiles (usec): 00:10:53.241 | 1.00th=[ 3195], 5.00th=[ 7701], 10.00th=[ 9110], 20.00th=[ 9765], 00:10:53.241 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:10:53.241 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13829], 95.00th=[17433], 00:10:53.241 | 99.00th=[31589], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:10:53.241 | 99.99th=[34866] 00:10:53.241 bw ( KiB/s): min=19552, max=21408, per=33.27%, avg=20480.00, stdev=1312.39, samples=2 00:10:53.241 iops : min= 4888, max= 5352, avg=5120.00, stdev=328.10, samples=2 00:10:53.241 lat (usec) : 250=0.01%, 500=0.03% 00:10:53.241 lat (msec) : 2=0.20%, 4=0.84%, 10=21.88%, 20=70.91%, 50=6.14% 00:10:53.241 cpu : usr=5.28%, sys=10.87%, ctx=617, majf=0, minf=1 00:10:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.241 issued rwts: total=5120,5165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.241 job2: (groupid=0, jobs=1): err= 0: pid=143565: Fri Dec 6 19:09:38 2024 00:10:53.241 read: IOPS=2601, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1009msec) 
00:10:53.241 slat (usec): min=2, max=17304, avg=156.63, stdev=957.05 00:10:53.241 clat (usec): min=3168, max=52347, avg=18249.32, stdev=7672.51 00:10:53.241 lat (usec): min=8296, max=57998, avg=18405.95, stdev=7747.19 00:10:53.241 clat percentiles (usec): 00:10:53.241 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11994], 20.00th=[12256], 00:10:53.241 | 30.00th=[13566], 40.00th=[14484], 50.00th=[15664], 60.00th=[16581], 00:10:53.241 | 70.00th=[20579], 80.00th=[22152], 90.00th=[28967], 95.00th=[35390], 00:10:53.241 | 99.00th=[41681], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:10:53.241 | 99.99th=[52167] 00:10:53.241 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:10:53.241 slat (usec): min=3, max=13539, avg=185.66, stdev=1020.27 00:10:53.241 clat (msec): min=7, max=129, avg=25.92, stdev=21.25 00:10:53.241 lat (msec): min=7, max=129, avg=26.10, stdev=21.39 00:10:53.241 clat percentiles (msec): 00:10:53.241 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:10:53.241 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 21], 60.00th=[ 22], 00:10:53.241 | 70.00th=[ 23], 80.00th=[ 33], 90.00th=[ 51], 95.00th=[ 59], 00:10:53.241 | 99.00th=[ 117], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 130], 00:10:53.241 | 99.99th=[ 130] 00:10:53.241 bw ( KiB/s): min=11720, max=12352, per=19.55%, avg=12036.00, stdev=446.89, samples=2 00:10:53.241 iops : min= 2930, max= 3088, avg=3009.00, stdev=111.72, samples=2 00:10:53.241 lat (msec) : 4=0.02%, 10=1.90%, 20=56.19%, 50=35.98%, 100=4.27% 00:10:53.241 lat (msec) : 250=1.65% 00:10:53.241 cpu : usr=2.08%, sys=5.26%, ctx=243, majf=0, minf=1 00:10:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.241 issued rwts: total=2625,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.241 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:53.241 job3: (groupid=0, jobs=1): err= 0: pid=143566: Fri Dec 6 19:09:38 2024 00:10:53.241 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:10:53.241 slat (usec): min=2, max=16970, avg=132.92, stdev=784.87 00:10:53.241 clat (usec): min=8518, max=53818, avg=16824.96, stdev=8147.25 00:10:53.241 lat (usec): min=8857, max=53824, avg=16957.88, stdev=8194.43 00:10:53.241 clat percentiles (usec): 00:10:53.241 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11338], 20.00th=[12256], 00:10:53.241 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[14615], 00:10:53.241 | 70.00th=[16909], 80.00th=[19530], 90.00th=[26608], 95.00th=[34341], 00:10:53.241 | 99.00th=[51643], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:10:53.241 | 99.99th=[53740] 00:10:53.241 write: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1009msec); 0 zone resets 00:10:53.241 slat (usec): min=3, max=12783, avg=129.37, stdev=689.03 00:10:53.241 clat (usec): min=3523, max=58655, avg=17692.12, stdev=9307.32 00:10:53.241 lat (usec): min=8716, max=58659, avg=17821.49, stdev=9354.02 00:10:53.241 clat percentiles (usec): 00:10:53.241 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10945], 20.00th=[11863], 00:10:53.241 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13698], 60.00th=[15008], 00:10:53.241 | 70.00th=[17957], 80.00th=[21627], 90.00th=[32375], 95.00th=[39060], 00:10:53.241 | 99.00th=[52167], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:10:53.241 | 99.99th=[58459] 00:10:53.241 bw ( KiB/s): min=10280, max=19048, per=23.82%, avg=14664.00, stdev=6199.91, samples=2 00:10:53.241 iops : min= 2570, max= 4762, avg=3666.00, stdev=1549.98, samples=2 00:10:53.241 lat (msec) : 4=0.01%, 10=1.91%, 20=79.03%, 50=18.01%, 100=1.03% 00:10:53.241 cpu : usr=3.47%, sys=7.94%, ctx=372, majf=0, minf=1 00:10:53.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:53.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:53.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.241 issued rwts: total=3584,3794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.241 00:10:53.241 Run status group 0 (all jobs): 00:10:53.241 READ: bw=56.7MiB/s (59.5MB/s), 10.2MiB/s-19.9MiB/s (10.7MB/s-20.9MB/s), io=59.5MiB (62.4MB), run=1004-1048msec 00:10:53.241 WRITE: bw=60.1MiB/s (63.0MB/s), 11.9MiB/s-20.1MiB/s (12.5MB/s-21.1MB/s), io=63.0MiB (66.1MB), run=1004-1048msec 00:10:53.241 00:10:53.241 Disk stats (read/write): 00:10:53.241 nvme0n1: ios=3611/3591, merge=0/0, ticks=43318/55418, in_queue=98736, util=99.40% 00:10:53.241 nvme0n2: ios=4314/4608, merge=0/0, ticks=29494/28374, in_queue=57868, util=98.37% 00:10:53.241 nvme0n3: ios=2250/2560, merge=0/0, ticks=17664/31932, in_queue=49596, util=98.96% 00:10:53.241 nvme0n4: ios=3107/3525, merge=0/0, ticks=14908/16335, in_queue=31243, util=100.00% 00:10:53.241 19:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:53.242 [global] 00:10:53.242 thread=1 00:10:53.242 invalidate=1 00:10:53.242 rw=randwrite 00:10:53.242 time_based=1 00:10:53.242 runtime=1 00:10:53.242 ioengine=libaio 00:10:53.242 direct=1 00:10:53.242 bs=4096 00:10:53.242 iodepth=128 00:10:53.242 norandommap=0 00:10:53.242 numjobs=1 00:10:53.242 00:10:53.242 verify_dump=1 00:10:53.242 verify_backlog=512 00:10:53.242 verify_state_save=0 00:10:53.242 do_verify=1 00:10:53.242 verify=crc32c-intel 00:10:53.242 [job0] 00:10:53.242 filename=/dev/nvme0n1 00:10:53.242 [job1] 00:10:53.242 filename=/dev/nvme0n2 00:10:53.242 [job2] 00:10:53.242 filename=/dev/nvme0n3 00:10:53.242 [job3] 00:10:53.242 filename=/dev/nvme0n4 00:10:53.242 Could not set queue depth (nvme0n1) 00:10:53.242 Could not set queue depth (nvme0n2) 00:10:53.242 Could not set queue depth 
(nvme0n3) 00:10:53.242 Could not set queue depth (nvme0n4) 00:10:53.500 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.500 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.500 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.500 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.500 fio-3.35 00:10:53.500 Starting 4 threads 00:10:54.875 00:10:54.875 job0: (groupid=0, jobs=1): err= 0: pid=143916: Fri Dec 6 19:09:39 2024 00:10:54.875 read: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec) 00:10:54.875 slat (usec): min=2, max=10706, avg=134.74, stdev=795.11 00:10:54.875 clat (usec): min=570, max=41488, avg=17119.22, stdev=8422.77 00:10:54.875 lat (usec): min=2862, max=41493, avg=17253.96, stdev=8463.99 00:10:54.875 clat percentiles (usec): 00:10:54.875 | 1.00th=[ 4883], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10683], 00:10:54.875 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[14615], 00:10:54.875 | 70.00th=[21103], 80.00th=[25560], 90.00th=[28705], 95.00th=[34866], 00:10:54.875 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:54.875 | 99.99th=[41681] 00:10:54.875 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:54.875 slat (usec): min=3, max=8727, avg=104.45, stdev=612.58 00:10:54.875 clat (usec): min=1193, max=32364, avg=14320.76, stdev=5473.50 00:10:54.875 lat (usec): min=1218, max=32368, avg=14425.21, stdev=5499.00 00:10:54.875 clat percentiles (usec): 00:10:54.875 | 1.00th=[ 5080], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[10683], 00:10:54.875 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:10:54.875 | 70.00th=[16057], 80.00th=[20317], 90.00th=[23462], 95.00th=[25297], 00:10:54.875 | 
99.00th=[27919], 99.50th=[29492], 99.90th=[32375], 99.95th=[32375], 00:10:54.875 | 99.99th=[32375] 00:10:54.875 bw ( KiB/s): min=12288, max=20521, per=25.69%, avg=16404.50, stdev=5821.61, samples=2 00:10:54.875 iops : min= 3072, max= 5130, avg=4101.00, stdev=1455.23, samples=2 00:10:54.876 lat (usec) : 750=0.01% 00:10:54.876 lat (msec) : 2=0.09%, 4=0.57%, 10=11.53%, 20=61.96%, 50=25.85% 00:10:54.876 cpu : usr=4.19%, sys=6.29%, ctx=408, majf=0, minf=2 00:10:54.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:54.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.876 issued rwts: total=4006,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.876 job1: (groupid=0, jobs=1): err= 0: pid=143917: Fri Dec 6 19:09:39 2024 00:10:54.876 read: IOPS=3648, BW=14.2MiB/s (14.9MB/s)(14.9MiB/1046msec) 00:10:54.876 slat (usec): min=2, max=15225, avg=142.75, stdev=896.69 00:10:54.876 clat (usec): min=8392, max=70678, avg=19856.96, stdev=12461.02 00:10:54.876 lat (usec): min=8407, max=70682, avg=19999.71, stdev=12516.70 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11600], 00:10:54.876 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14091], 60.00th=[17171], 00:10:54.876 | 70.00th=[21103], 80.00th=[26346], 90.00th=[34866], 95.00th=[48497], 00:10:54.876 | 99.00th=[66847], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:10:54.876 | 99.99th=[70779] 00:10:54.876 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:10:54.876 slat (usec): min=4, max=7812, avg=103.62, stdev=563.58 00:10:54.876 clat (usec): min=6099, max=29805, avg=13752.63, stdev=4207.24 00:10:54.876 lat (usec): min=6105, max=29814, avg=13856.25, stdev=4235.73 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 
1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11076], 00:10:54.876 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[13304], 00:10:54.876 | 70.00th=[14353], 80.00th=[17171], 90.00th=[19792], 95.00th=[22938], 00:10:54.876 | 99.00th=[27132], 99.50th=[28967], 99.90th=[29754], 99.95th=[29754], 00:10:54.876 | 99.99th=[29754] 00:10:54.876 bw ( KiB/s): min=16384, max=16384, per=25.66%, avg=16384.00, stdev= 0.00, samples=2 00:10:54.876 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:54.876 lat (msec) : 10=7.79%, 20=73.00%, 50=16.97%, 100=2.24% 00:10:54.876 cpu : usr=3.92%, sys=6.12%, ctx=341, majf=0, minf=1 00:10:54.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:54.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.876 issued rwts: total=3816,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.876 job2: (groupid=0, jobs=1): err= 0: pid=143918: Fri Dec 6 19:09:39 2024 00:10:54.876 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:54.876 slat (usec): min=2, max=10704, avg=136.60, stdev=773.06 00:10:54.876 clat (usec): min=10134, max=28823, avg=17645.91, stdev=3833.28 00:10:54.876 lat (usec): min=10178, max=29364, avg=17782.52, stdev=3877.51 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12518], 20.00th=[13698], 00:10:54.876 | 30.00th=[15008], 40.00th=[16450], 50.00th=[17695], 60.00th=[18744], 00:10:54.876 | 70.00th=[19530], 80.00th=[20841], 90.00th=[23200], 95.00th=[24249], 00:10:54.876 | 99.00th=[25822], 99.50th=[27132], 99.90th=[28705], 99.95th=[28705], 00:10:54.876 | 99.99th=[28705] 00:10:54.876 write: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1005msec); 0 zone resets 00:10:54.876 slat (usec): min=3, max=9638, avg=119.99, stdev=692.58 
00:10:54.876 clat (usec): min=2432, max=28154, avg=15757.77, stdev=3827.07 00:10:54.876 lat (usec): min=6251, max=28164, avg=15877.76, stdev=3864.59 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 1.00th=[ 6521], 5.00th=[10683], 10.00th=[12387], 20.00th=[12911], 00:10:54.876 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14877], 60.00th=[16319], 00:10:54.876 | 70.00th=[17171], 80.00th=[18220], 90.00th=[21365], 95.00th=[24511], 00:10:54.876 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28181], 99.95th=[28181], 00:10:54.876 | 99.99th=[28181] 00:10:54.876 bw ( KiB/s): min=14792, max=16384, per=24.41%, avg=15588.00, stdev=1125.71, samples=2 00:10:54.876 iops : min= 3698, max= 4096, avg=3897.00, stdev=281.43, samples=2 00:10:54.876 lat (msec) : 4=0.01%, 10=1.13%, 20=80.38%, 50=18.48% 00:10:54.876 cpu : usr=3.39%, sys=5.08%, ctx=319, majf=0, minf=1 00:10:54.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:54.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.876 issued rwts: total=3584,4025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.876 job3: (groupid=0, jobs=1): err= 0: pid=143919: Fri Dec 6 19:09:39 2024 00:10:54.876 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:54.876 slat (usec): min=2, max=13159, avg=110.62, stdev=656.23 00:10:54.876 clat (usec): min=7547, max=40301, avg=14102.63, stdev=4269.43 00:10:54.876 lat (usec): min=7554, max=40313, avg=14213.25, stdev=4306.69 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11731], 00:10:54.876 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[13042], 00:10:54.876 | 70.00th=[14222], 80.00th=[15795], 90.00th=[20317], 95.00th=[23462], 00:10:54.876 | 99.00th=[25035], 99.50th=[40109], 99.90th=[40109], 
99.95th=[40109], 00:10:54.876 | 99.99th=[40109] 00:10:54.876 write: IOPS=4452, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1006msec); 0 zone resets 00:10:54.876 slat (usec): min=3, max=9736, avg=112.95, stdev=718.49 00:10:54.876 clat (usec): min=4925, max=46121, avg=15258.58, stdev=7480.79 00:10:54.876 lat (usec): min=5631, max=46126, avg=15371.53, stdev=7526.41 00:10:54.876 clat percentiles (usec): 00:10:54.876 | 1.00th=[ 7373], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11469], 00:10:54.876 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12518], 60.00th=[13173], 00:10:54.876 | 70.00th=[14484], 80.00th=[16057], 90.00th=[24249], 95.00th=[36963], 00:10:54.876 | 99.00th=[42730], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:10:54.876 | 99.99th=[45876] 00:10:54.876 bw ( KiB/s): min=13768, max=21048, per=27.27%, avg=17408.00, stdev=5147.74, samples=2 00:10:54.876 iops : min= 3442, max= 5262, avg=4352.00, stdev=1286.93, samples=2 00:10:54.876 lat (msec) : 10=6.15%, 20=82.61%, 50=11.24% 00:10:54.876 cpu : usr=4.58%, sys=7.96%, ctx=362, majf=0, minf=1 00:10:54.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:54.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.876 issued rwts: total=4096,4479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.876 00:10:54.876 Run status group 0 (all jobs): 00:10:54.876 READ: bw=57.9MiB/s (60.7MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=60.6MiB (63.5MB), run=1003-1046msec 00:10:54.876 WRITE: bw=62.4MiB/s (65.4MB/s), 15.3MiB/s-17.4MiB/s (16.0MB/s-18.2MB/s), io=65.2MiB (68.4MB), run=1003-1046msec 00:10:54.876 00:10:54.876 Disk stats (read/write): 00:10:54.876 nvme0n1: ios=3122/3470, merge=0/0, ticks=30080/28386, in_queue=58466, util=86.67% 00:10:54.876 nvme0n2: ios=3479/3584, merge=0/0, ticks=19051/13374, in_queue=32425, 
util=98.17% 00:10:54.876 nvme0n3: ios=3128/3474, merge=0/0, ticks=19235/19902, in_queue=39137, util=98.02% 00:10:54.876 nvme0n4: ios=3447/3584, merge=0/0, ticks=22734/22548, in_queue=45282, util=97.17% 00:10:54.876 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:54.876 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=144059 00:10:54.876 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:54.876 19:09:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:54.876 [global] 00:10:54.876 thread=1 00:10:54.876 invalidate=1 00:10:54.876 rw=read 00:10:54.876 time_based=1 00:10:54.876 runtime=10 00:10:54.876 ioengine=libaio 00:10:54.876 direct=1 00:10:54.876 bs=4096 00:10:54.876 iodepth=1 00:10:54.876 norandommap=1 00:10:54.876 numjobs=1 00:10:54.876 00:10:54.876 [job0] 00:10:54.876 filename=/dev/nvme0n1 00:10:54.876 [job1] 00:10:54.876 filename=/dev/nvme0n2 00:10:54.876 [job2] 00:10:54.876 filename=/dev/nvme0n3 00:10:54.876 [job3] 00:10:54.876 filename=/dev/nvme0n4 00:10:54.876 Could not set queue depth (nvme0n1) 00:10:54.876 Could not set queue depth (nvme0n2) 00:10:54.876 Could not set queue depth (nvme0n3) 00:10:54.876 Could not set queue depth (nvme0n4) 00:10:54.876 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.876 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.876 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.876 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.876 fio-3.35 00:10:54.876 Starting 4 threads 00:10:58.162 19:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:58.162 19:09:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:58.162 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35463168, buflen=4096 00:10:58.162 fio: pid=144151, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.162 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.162 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:58.162 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38535168, buflen=4096 00:10:58.162 fio: pid=144150, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.728 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.728 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:58.728 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9457664, buflen=4096 00:10:58.728 fio: pid=144148, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.728 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.728 19:09:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:58.986 fio: io_u error on file 
/dev/nvme0n2: Input/output error: read offset=46673920, buflen=4096 00:10:58.986 fio: pid=144149, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:58.986 00:10:58.986 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=144148: Fri Dec 6 19:09:43 2024 00:10:58.986 read: IOPS=649, BW=2596KiB/s (2658kB/s)(9236KiB/3558msec) 00:10:58.986 slat (usec): min=4, max=12914, avg=21.19, stdev=314.45 00:10:58.986 clat (usec): min=177, max=42038, avg=1506.46, stdev=6939.40 00:10:58.986 lat (usec): min=182, max=54002, avg=1527.65, stdev=6978.89 00:10:58.986 clat percentiles (usec): 00:10:58.986 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:10:58.986 | 30.00th=[ 233], 40.00th=[ 258], 50.00th=[ 289], 60.00th=[ 314], 00:10:58.986 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 396], 95.00th=[ 429], 00:10:58.986 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:58.986 | 99.99th=[42206] 00:10:58.986 bw ( KiB/s): min= 96, max=12152, per=6.83%, avg=2244.00, stdev=4858.30, samples=6 00:10:58.986 iops : min= 24, max= 3038, avg=561.00, stdev=1214.57, samples=6 00:10:58.986 lat (usec) : 250=37.97%, 500=58.96%, 750=0.04% 00:10:58.986 lat (msec) : 50=2.99% 00:10:58.986 cpu : usr=0.31%, sys=0.96%, ctx=2313, majf=0, minf=2 00:10:58.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.986 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.986 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.986 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=144149: Fri Dec 6 19:09:43 2024 00:10:58.986 read: IOPS=2947, BW=11.5MiB/s (12.1MB/s)(44.5MiB/3866msec) 00:10:58.986 slat (usec): min=4, max=8819, 
avg=15.64, stdev=160.31 00:10:58.986 clat (usec): min=163, max=41191, avg=318.55, stdev=1516.94 00:10:58.986 lat (usec): min=169, max=49864, avg=333.61, stdev=1560.69 00:10:58.986 clat percentiles (usec): 00:10:58.986 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:10:58.986 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 241], 60.00th=[ 262], 00:10:58.986 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 388], 00:10:58.986 | 99.00th=[ 494], 99.50th=[ 570], 99.90th=[41157], 99.95th=[41157], 00:10:58.986 | 99.99th=[41157] 00:10:58.986 bw ( KiB/s): min= 9888, max=16536, per=38.71%, avg=12724.86, stdev=2608.81, samples=7 00:10:58.986 iops : min= 2472, max= 4134, avg=3181.14, stdev=652.29, samples=7 00:10:58.986 lat (usec) : 250=55.13%, 500=43.92%, 750=0.68%, 1000=0.05% 00:10:58.986 lat (msec) : 2=0.04%, 10=0.01%, 20=0.02%, 50=0.14% 00:10:58.986 cpu : usr=1.68%, sys=5.59%, ctx=11403, majf=0, minf=2 00:10:58.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.986 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.986 issued rwts: total=11396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.987 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=144150: Fri Dec 6 19:09:43 2024 00:10:58.987 read: IOPS=2880, BW=11.2MiB/s (11.8MB/s)(36.8MiB/3267msec) 00:10:58.987 slat (usec): min=5, max=11682, avg=13.47, stdev=135.94 00:10:58.987 clat (usec): min=177, max=42033, avg=328.11, stdev=1482.09 00:10:58.987 lat (usec): min=184, max=42045, avg=341.57, stdev=1488.66 00:10:58.987 clat percentiles (usec): 00:10:58.987 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:10:58.987 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 258], 60.00th=[ 281], 00:10:58.987 | 70.00th=[ 297], 
80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 383], 00:10:58.987 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 00:10:58.987 | 99.99th=[42206] 00:10:58.987 bw ( KiB/s): min= 2448, max=16176, per=34.30%, avg=11276.00, stdev=5042.62, samples=6 00:10:58.987 iops : min= 612, max= 4044, avg=2819.00, stdev=1260.66, samples=6 00:10:58.987 lat (usec) : 250=46.46%, 500=52.29%, 750=1.06%, 1000=0.01% 00:10:58.987 lat (msec) : 2=0.03%, 50=0.14% 00:10:58.987 cpu : usr=1.93%, sys=5.45%, ctx=9411, majf=0, minf=1 00:10:58.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.987 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.987 issued rwts: total=9409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.987 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=144151: Fri Dec 6 19:09:43 2024 00:10:58.987 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(33.8MiB/2962msec) 00:10:58.987 slat (nsec): min=4903, max=73032, avg=13824.87, stdev=6533.18 00:10:58.987 clat (usec): min=182, max=41982, avg=322.18, stdev=1392.51 00:10:58.987 lat (usec): min=189, max=41996, avg=336.00, stdev=1392.76 00:10:58.987 clat percentiles (usec): 00:10:58.987 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 223], 00:10:58.987 | 30.00th=[ 235], 40.00th=[ 253], 50.00th=[ 269], 60.00th=[ 281], 00:10:58.987 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 367], 00:10:58.987 | 99.00th=[ 474], 99.50th=[ 506], 99.90th=[40633], 99.95th=[41157], 00:10:58.987 | 99.99th=[42206] 00:10:58.987 bw ( KiB/s): min=10168, max=14184, per=37.94%, avg=12472.00, stdev=1632.05, samples=5 00:10:58.987 iops : min= 2542, max= 3546, avg=3118.00, stdev=408.01, samples=5 00:10:58.987 lat (usec) : 250=38.84%, 500=60.61%, 750=0.38%, 1000=0.01% 
00:10:58.987 lat (msec) : 2=0.02%, 10=0.01%, 50=0.12% 00:10:58.987 cpu : usr=2.33%, sys=6.08%, ctx=8662, majf=0, minf=1 00:10:58.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.987 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.987 issued rwts: total=8659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.987 00:10:58.987 Run status group 0 (all jobs): 00:10:58.987 READ: bw=32.1MiB/s (33.7MB/s), 2596KiB/s-11.5MiB/s (2658kB/s-12.1MB/s), io=124MiB (130MB), run=2962-3866msec 00:10:58.987 00:10:58.987 Disk stats (read/write): 00:10:58.987 nvme0n1: ios=2303/0, merge=0/0, ticks=3271/0, in_queue=3271, util=95.71% 00:10:58.987 nvme0n2: ios=11395/0, merge=0/0, ticks=3525/0, in_queue=3525, util=95.68% 00:10:58.987 nvme0n3: ios=8824/0, merge=0/0, ticks=2874/0, in_queue=2874, util=96.26% 00:10:58.987 nvme0n4: ios=8703/0, merge=0/0, ticks=3162/0, in_queue=3162, util=99.09% 00:10:59.245 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.245 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:59.503 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.503 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:59.763 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.763 19:09:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:00.022 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.022 19:09:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 144059 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:00.279 nvmf hotplug test: fio failed as expected 00:11:00.279 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.843 rmmod nvme_tcp 00:11:00.843 rmmod nvme_fabrics 00:11:00.843 rmmod nvme_keyring 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 142019 ']' 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 142019 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 142019 ']' 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 142019 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142019 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142019' 00:11:00.843 killing process with pid 142019 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 142019 00:11:00.843 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 142019 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:01.102 19:09:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.102 19:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.010 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.010 00:11:03.010 real 0m24.339s 00:11:03.010 user 1m25.142s 00:11:03.010 sys 0m7.774s 00:11:03.010 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.010 19:09:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.010 ************************************ 00:11:03.010 END TEST nvmf_fio_target 00:11:03.010 ************************************ 00:11:03.010 19:09:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.010 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.010 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.010 19:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:03.010 ************************************ 00:11:03.010 START TEST nvmf_bdevio 00:11:03.011 ************************************ 00:11:03.011 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.269 * Looking for test storage... 00:11:03.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.269 19:09:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.269 --rc genhtml_branch_coverage=1 00:11:03.269 --rc genhtml_function_coverage=1 00:11:03.269 --rc genhtml_legend=1 00:11:03.269 --rc geninfo_all_blocks=1 00:11:03.269 --rc geninfo_unexecuted_blocks=1 00:11:03.269 00:11:03.269 ' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.269 --rc genhtml_branch_coverage=1 00:11:03.269 --rc genhtml_function_coverage=1 00:11:03.269 --rc genhtml_legend=1 00:11:03.269 --rc geninfo_all_blocks=1 00:11:03.269 --rc geninfo_unexecuted_blocks=1 00:11:03.269 00:11:03.269 ' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.269 --rc genhtml_branch_coverage=1 00:11:03.269 --rc genhtml_function_coverage=1 00:11:03.269 --rc genhtml_legend=1 00:11:03.269 --rc geninfo_all_blocks=1 00:11:03.269 --rc geninfo_unexecuted_blocks=1 00:11:03.269 00:11:03.269 ' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.269 --rc genhtml_branch_coverage=1 00:11:03.269 --rc genhtml_function_coverage=1 00:11:03.269 --rc genhtml_legend=1 00:11:03.269 --rc geninfo_all_blocks=1 00:11:03.269 --rc geninfo_unexecuted_blocks=1 00:11:03.269 00:11:03.269 ' 00:11:03.269 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.270 19:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.807 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.807 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:05.807 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:05.807 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.807 
19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:05.807 Found net devices under 0000:84:00.0: cvl_0_0 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:05.807 Found net devices under 0000:84:00.1: cvl_0_1 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.807 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:11:05.808 00:11:05.808 --- 10.0.0.2 ping statistics --- 00:11:05.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.808 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:11:05.808 00:11:05.808 --- 10.0.0.1 ping statistics --- 00:11:05.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.808 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.808 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=146808 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 146808 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 146808 ']' 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.808 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.808 [2024-12-06 19:09:50.650559] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:11:05.808 [2024-12-06 19:09:50.650629] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.808 [2024-12-06 19:09:50.727405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.808 [2024-12-06 19:09:50.788168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.808 [2024-12-06 19:09:50.788253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.808 [2024-12-06 19:09:50.788266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.808 [2024-12-06 19:09:50.788277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.808 [2024-12-06 19:09:50.788286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:05.808 [2024-12-06 19:09:50.790244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:05.808 [2024-12-06 19:09:50.790286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:05.808 [2024-12-06 19:09:50.790343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:05.808 [2024-12-06 19:09:50.790347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.067 [2024-12-06 19:09:50.952675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.067 19:09:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.067 19:09:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.067 Malloc0 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.067 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.068 [2024-12-06 19:09:51.023775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:06.068 { 00:11:06.068 "params": { 00:11:06.068 "name": "Nvme$subsystem", 00:11:06.068 "trtype": "$TEST_TRANSPORT", 00:11:06.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.068 "adrfam": "ipv4", 00:11:06.068 "trsvcid": "$NVMF_PORT", 00:11:06.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.068 "hdgst": ${hdgst:-false}, 00:11:06.068 "ddgst": ${ddgst:-false} 00:11:06.068 }, 00:11:06.068 "method": "bdev_nvme_attach_controller" 00:11:06.068 } 00:11:06.068 EOF 00:11:06.068 )") 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:06.068 19:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:06.068 "params": { 00:11:06.068 "name": "Nvme1", 00:11:06.068 "trtype": "tcp", 00:11:06.068 "traddr": "10.0.0.2", 00:11:06.068 "adrfam": "ipv4", 00:11:06.068 "trsvcid": "4420", 00:11:06.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.068 "hdgst": false, 00:11:06.068 "ddgst": false 00:11:06.068 }, 00:11:06.068 "method": "bdev_nvme_attach_controller" 00:11:06.068 }' 00:11:06.068 [2024-12-06 19:09:51.075129] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:11:06.068 [2024-12-06 19:09:51.075217] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146954 ] 00:11:06.327 [2024-12-06 19:09:51.146281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.327 [2024-12-06 19:09:51.211746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.327 [2024-12-06 19:09:51.211776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.327 [2024-12-06 19:09:51.211780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.586 I/O targets: 00:11:06.586 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:06.586 00:11:06.586 00:11:06.586 CUnit - A unit testing framework for C - Version 2.1-3 00:11:06.586 http://cunit.sourceforge.net/ 00:11:06.586 00:11:06.586 00:11:06.586 Suite: bdevio tests on: Nvme1n1 00:11:06.586 Test: blockdev write read block ...passed 00:11:06.845 Test: blockdev write zeroes read block ...passed 00:11:06.845 Test: blockdev write zeroes read no split ...passed 00:11:06.845 Test: blockdev write zeroes read split 
...passed 00:11:06.845 Test: blockdev write zeroes read split partial ...passed 00:11:06.845 Test: blockdev reset ...[2024-12-06 19:09:51.722825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:06.845 [2024-12-06 19:09:51.722937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdafa70 (9): Bad file descriptor 00:11:06.845 [2024-12-06 19:09:51.735906] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:06.845 passed 00:11:06.845 Test: blockdev write read 8 blocks ...passed 00:11:06.845 Test: blockdev write read size > 128k ...passed 00:11:06.845 Test: blockdev write read invalid size ...passed 00:11:06.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:06.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:06.845 Test: blockdev write read max offset ...passed 00:11:06.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:06.845 Test: blockdev writev readv 8 blocks ...passed 00:11:06.845 Test: blockdev writev readv 30 x 1block ...passed 00:11:07.104 Test: blockdev writev readv block ...passed 00:11:07.104 Test: blockdev writev readv size > 128k ...passed 00:11:07.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:07.104 Test: blockdev comparev and writev ...[2024-12-06 19:09:51.947005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.947041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.947066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 
19:09:51.947084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.947444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.947469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.947491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.947507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.947881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.947905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.947926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.947943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.948307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.948331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:51.948364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.104 [2024-12-06 19:09:51.948382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:07.104 passed 00:11:07.104 Test: blockdev nvme passthru rw ...passed 00:11:07.104 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:09:52.030162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.104 [2024-12-06 19:09:52.030189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:52.030458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.104 [2024-12-06 19:09:52.030482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:52.030638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.104 [2024-12-06 19:09:52.030661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:07.104 [2024-12-06 19:09:52.030820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.104 [2024-12-06 19:09:52.030844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:07.104 passed 00:11:07.104 Test: blockdev nvme admin passthru ...passed 00:11:07.104 Test: blockdev copy ...passed 00:11:07.104 00:11:07.104 Run Summary: Type Total Ran Passed Failed Inactive 00:11:07.104 suites 1 1 n/a 0 0 00:11:07.104 tests 23 23 23 0 0 00:11:07.104 asserts 152 152 152 0 n/a 00:11:07.104 00:11:07.104 Elapsed time = 1.036 seconds 
00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.363 rmmod nvme_tcp 00:11:07.363 rmmod nvme_fabrics 00:11:07.363 rmmod nvme_keyring 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 146808 ']' 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 146808 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 146808 ']' 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 146808 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146808 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146808' 00:11:07.363 killing process with pid 146808 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 146808 00:11:07.363 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 146808 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.624 19:09:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.164 00:11:10.164 real 0m6.654s 00:11:10.164 user 0m10.713s 00:11:10.164 sys 0m2.277s 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.164 ************************************ 00:11:10.164 END TEST nvmf_bdevio 00:11:10.164 ************************************ 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:10.164 00:11:10.164 real 3m57.441s 00:11:10.164 user 10m18.783s 00:11:10.164 sys 1m10.614s 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.164 ************************************ 00:11:10.164 END TEST nvmf_target_core 00:11:10.164 ************************************ 00:11:10.164 19:09:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.164 19:09:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.164 19:09:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.164 19:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:11:10.164 ************************************ 00:11:10.164 START TEST nvmf_target_extra 00:11:10.164 ************************************ 00:11:10.164 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.165 * Looking for test storage... 00:11:10.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 
00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.165 ************************************ 00:11:10.165 START TEST nvmf_example 00:11:10.165 ************************************ 00:11:10.165 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:10.165 * Looking for test storage... 00:11:10.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.165 
19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.165 --rc 
genhtml_branch_coverage=1 00:11:10.165 --rc genhtml_function_coverage=1 00:11:10.165 --rc genhtml_legend=1 00:11:10.165 --rc geninfo_all_blocks=1 00:11:10.165 --rc geninfo_unexecuted_blocks=1 00:11:10.165 00:11:10.165 ' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.165 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:10.166 19:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.166 
19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.166 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:12.697 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:12.698 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:12.698 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:12.698 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:12.698 Found net devices under 0000:84:00.0: cvl_0_0 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:12.698 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:12.698 Found net devices under 0000:84:00.1: cvl_0_1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.698 
19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:12.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:11:12.698 00:11:12.698 --- 10.0.0.2 ping statistics --- 00:11:12.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.698 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:11:12.698 00:11:12.698 --- 10.0.0.1 ping statistics --- 00:11:12.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.698 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.698 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.699 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=149224 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 149224 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 149224 ']' 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:12.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.699 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:13.634 19:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:13.634 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:25.832 Initializing NVMe Controllers 00:11:25.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.832 Initialization complete. Launching workers. 00:11:25.832 ======================================================== 00:11:25.832 Latency(us) 00:11:25.832 Device Information : IOPS MiB/s Average min max 00:11:25.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14767.36 57.68 4333.16 764.39 15316.71 00:11:25.832 ======================================================== 00:11:25.832 Total : 14767.36 57.68 4333.16 764.39 15316.71 00:11:25.832 00:11:25.832 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:25.832 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.833 rmmod nvme_tcp 00:11:25.833 rmmod nvme_fabrics 00:11:25.833 rmmod nvme_keyring 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 149224 ']' 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 149224 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 149224 ']' 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 149224 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 149224 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 149224' 00:11:25.833 killing process with pid 149224 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 149224 00:11:25.833 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 149224 00:11:25.833 nvmf threads initialize successfully 00:11:25.833 bdev subsystem init successfully 00:11:25.833 created a nvmf target service 00:11:25.833 create targets's poll groups done 00:11:25.833 all subsystems of target started 00:11:25.833 nvmf target is running 00:11:25.833 all subsystems of target stopped 00:11:25.833 destroy targets's poll groups done 00:11:25.833 destroyed the nvmf target service 00:11:25.833 bdev subsystem finish 
successfully 00:11:25.833 nvmf threads destroy successfully 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.833 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.400 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.401 00:11:26.401 real 0m16.330s 00:11:26.401 user 0m45.256s 00:11:26.401 sys 0m3.808s 00:11:26.401 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.401 ************************************ 00:11:26.401 END TEST nvmf_example 00:11:26.401 ************************************ 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.401 ************************************ 00:11:26.401 START TEST nvmf_filesystem 00:11:26.401 ************************************ 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:26.401 * Looking for test storage... 
00:11:26.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:26.664 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.664 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:26.664 --rc genhtml_branch_coverage=1 00:11:26.664 --rc genhtml_function_coverage=1 00:11:26.664 --rc genhtml_legend=1 00:11:26.664 --rc geninfo_all_blocks=1 00:11:26.664 --rc geninfo_unexecuted_blocks=1 00:11:26.664 00:11:26.664 ' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.664 --rc genhtml_branch_coverage=1 00:11:26.664 --rc genhtml_function_coverage=1 00:11:26.664 --rc genhtml_legend=1 00:11:26.664 --rc geninfo_all_blocks=1 00:11:26.664 --rc geninfo_unexecuted_blocks=1 00:11:26.664 00:11:26.664 ' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.664 --rc genhtml_branch_coverage=1 00:11:26.664 --rc genhtml_function_coverage=1 00:11:26.664 --rc genhtml_legend=1 00:11:26.664 --rc geninfo_all_blocks=1 00:11:26.664 --rc geninfo_unexecuted_blocks=1 00:11:26.664 00:11:26.664 ' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.664 --rc genhtml_branch_coverage=1 00:11:26.664 --rc genhtml_function_coverage=1 00:11:26.664 --rc genhtml_legend=1 00:11:26.664 --rc geninfo_all_blocks=1 00:11:26.664 --rc geninfo_unexecuted_blocks=1 00:11:26.664 00:11:26.664 ' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:26.664 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:26.664 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:26.664 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:26.664 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:26.665 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:26.665 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:26.665 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:26.665 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:26.665 #define SPDK_CONFIG_H 00:11:26.665 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:26.665 #define SPDK_CONFIG_APPS 1 00:11:26.665 #define SPDK_CONFIG_ARCH native 00:11:26.665 #undef SPDK_CONFIG_ASAN 00:11:26.665 #undef SPDK_CONFIG_AVAHI 00:11:26.665 #undef SPDK_CONFIG_CET 00:11:26.665 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:26.665 #define SPDK_CONFIG_COVERAGE 1 00:11:26.665 #define SPDK_CONFIG_CROSS_PREFIX 00:11:26.665 #undef SPDK_CONFIG_CRYPTO 00:11:26.665 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:26.665 #undef SPDK_CONFIG_CUSTOMOCF 00:11:26.665 #undef SPDK_CONFIG_DAOS 00:11:26.665 #define SPDK_CONFIG_DAOS_DIR 00:11:26.665 #define SPDK_CONFIG_DEBUG 1 00:11:26.665 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:26.665 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:26.665 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:26.665 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:26.665 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:26.665 #undef SPDK_CONFIG_DPDK_UADK 00:11:26.665 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:26.665 #define SPDK_CONFIG_EXAMPLES 1 00:11:26.665 #undef SPDK_CONFIG_FC 00:11:26.665 #define SPDK_CONFIG_FC_PATH 00:11:26.665 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:26.665 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:26.665 #define SPDK_CONFIG_FSDEV 1 00:11:26.665 #undef SPDK_CONFIG_FUSE 00:11:26.665 #undef SPDK_CONFIG_FUZZER 00:11:26.665 #define SPDK_CONFIG_FUZZER_LIB 00:11:26.665 #undef SPDK_CONFIG_GOLANG 00:11:26.665 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:26.665 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:26.665 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:26.665 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:26.665 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:26.665 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:26.665 #undef SPDK_CONFIG_HAVE_LZ4 00:11:26.665 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:26.665 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:26.665 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:26.665 #define SPDK_CONFIG_IDXD 1 00:11:26.665 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:26.665 #undef SPDK_CONFIG_IPSEC_MB 00:11:26.665 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:26.665 #define SPDK_CONFIG_ISAL 1 00:11:26.665 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:26.665 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:26.665 #define SPDK_CONFIG_LIBDIR 00:11:26.665 #undef SPDK_CONFIG_LTO 00:11:26.665 #define SPDK_CONFIG_MAX_LCORES 128 00:11:26.665 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:26.665 #define SPDK_CONFIG_NVME_CUSE 1 00:11:26.665 #undef SPDK_CONFIG_OCF 00:11:26.665 #define SPDK_CONFIG_OCF_PATH 00:11:26.665 #define SPDK_CONFIG_OPENSSL_PATH 00:11:26.665 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:26.665 #define SPDK_CONFIG_PGO_DIR 00:11:26.665 #undef SPDK_CONFIG_PGO_USE 00:11:26.665 #define SPDK_CONFIG_PREFIX /usr/local 00:11:26.665 #undef SPDK_CONFIG_RAID5F 00:11:26.665 #undef SPDK_CONFIG_RBD 00:11:26.665 #define SPDK_CONFIG_RDMA 1 00:11:26.665 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:26.665 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:26.665 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:26.665 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:26.665 #define SPDK_CONFIG_SHARED 1 00:11:26.665 #undef SPDK_CONFIG_SMA 00:11:26.665 #define SPDK_CONFIG_TESTS 1 00:11:26.665 #undef SPDK_CONFIG_TSAN 00:11:26.665 #define SPDK_CONFIG_UBLK 1 00:11:26.665 #define SPDK_CONFIG_UBSAN 1 00:11:26.665 #undef SPDK_CONFIG_UNIT_TESTS 00:11:26.665 #undef SPDK_CONFIG_URING 00:11:26.665 #define SPDK_CONFIG_URING_PATH 00:11:26.665 #undef SPDK_CONFIG_URING_ZNS 00:11:26.665 #undef SPDK_CONFIG_USDT 00:11:26.665 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:26.665 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:26.665 #define SPDK_CONFIG_VFIO_USER 1 00:11:26.665 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:26.665 #define SPDK_CONFIG_VHOST 1 00:11:26.666 #define SPDK_CONFIG_VIRTIO 1 00:11:26.666 #undef SPDK_CONFIG_VTUNE 00:11:26.666 #define SPDK_CONFIG_VTUNE_DIR 00:11:26.666 #define SPDK_CONFIG_WERROR 1 00:11:26.666 #define SPDK_CONFIG_WPDK_DIR 00:11:26.666 #undef SPDK_CONFIG_XNVME 00:11:26.666 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:26.666 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:26.666 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:26.666 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:26.666 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:26.667 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:26.667 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:26.667 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 150932 ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 150932 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.D2QftL 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.D2QftL/tests/target /tmp/spdk.D2QftL 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:26.668 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39630196736 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=45077106688 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5446909952 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.669 
19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22528520192 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538551296 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=8993042432 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9015422976 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22380544 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22538244096 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538555392 00:11:26.669 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4507697152 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4507709440 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:26.669 * Looking for test storage... 
00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=39630196736 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7661502464 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.669 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:26.669 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:26.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.669 --rc genhtml_branch_coverage=1 00:11:26.669 --rc genhtml_function_coverage=1 00:11:26.669 --rc genhtml_legend=1 00:11:26.669 --rc geninfo_all_blocks=1 00:11:26.669 --rc geninfo_unexecuted_blocks=1 00:11:26.669 00:11:26.669 ' 00:11:26.669 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:26.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.669 --rc genhtml_branch_coverage=1 00:11:26.669 --rc genhtml_function_coverage=1 00:11:26.669 --rc genhtml_legend=1 00:11:26.670 --rc geninfo_all_blocks=1 00:11:26.670 --rc geninfo_unexecuted_blocks=1 00:11:26.670 00:11:26.670 ' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:26.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.670 --rc genhtml_branch_coverage=1 00:11:26.670 --rc genhtml_function_coverage=1 00:11:26.670 --rc genhtml_legend=1 00:11:26.670 --rc geninfo_all_blocks=1 00:11:26.670 --rc geninfo_unexecuted_blocks=1 00:11:26.670 00:11:26.670 ' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:26.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.670 --rc genhtml_branch_coverage=1 00:11:26.670 --rc genhtml_function_coverage=1 00:11:26.670 --rc genhtml_legend=1 00:11:26.670 --rc geninfo_all_blocks=1 00:11:26.670 --rc geninfo_unexecuted_blocks=1 00:11:26.670 00:11:26.670 ' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.670 19:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.670 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.929 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.929 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.929 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.929 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.837 19:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:28.837 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:28.837 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.837 19:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:28.837 Found net devices under 0000:84:00.0: cvl_0_0 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:28.837 Found net devices under 0000:84:00.1: cvl_0_1 00:11:28.837 19:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:28.837 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:28.838 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.838 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:11:29.096 00:11:29.096 --- 10.0.0.2 ping statistics --- 00:11:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.096 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:11:29.096 00:11:29.096 --- 10.0.0.1 ping statistics --- 00:11:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.096 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:29.096 19:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.096 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.096 ************************************ 00:11:29.096 START TEST nvmf_filesystem_no_in_capsule 00:11:29.096 ************************************ 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=152596 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 152596 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 152596 ']' 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.096 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.096 [2024-12-06 19:10:14.058962] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:11:29.096 [2024-12-06 19:10:14.059037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.096 [2024-12-06 19:10:14.131453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.354 [2024-12-06 19:10:14.187224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.354 [2024-12-06 19:10:14.187282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.354 [2024-12-06 19:10:14.187310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.354 [2024-12-06 19:10:14.187322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.354 [2024-12-06 19:10:14.187332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.354 [2024-12-06 19:10:14.188973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.354 [2024-12-06 19:10:14.189028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.354 [2024-12-06 19:10:14.189072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.354 [2024-12-06 19:10:14.189075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.355 [2024-12-06 19:10:14.336315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.355 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 Malloc1 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 [2024-12-06 19:10:14.540475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:29.613 19:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:29.613 { 00:11:29.613 "name": "Malloc1", 00:11:29.613 "aliases": [ 00:11:29.613 "0d17b5fb-6bcb-42a3-8df9-b16ec4ff3f87" 00:11:29.613 ], 00:11:29.613 "product_name": "Malloc disk", 00:11:29.613 "block_size": 512, 00:11:29.613 "num_blocks": 1048576, 00:11:29.613 "uuid": "0d17b5fb-6bcb-42a3-8df9-b16ec4ff3f87", 00:11:29.613 "assigned_rate_limits": { 00:11:29.613 "rw_ios_per_sec": 0, 00:11:29.613 "rw_mbytes_per_sec": 0, 00:11:29.613 "r_mbytes_per_sec": 0, 00:11:29.613 "w_mbytes_per_sec": 0 00:11:29.613 }, 00:11:29.613 "claimed": true, 00:11:29.613 "claim_type": "exclusive_write", 00:11:29.613 "zoned": false, 00:11:29.613 "supported_io_types": { 00:11:29.613 "read": true, 00:11:29.613 "write": true, 00:11:29.613 "unmap": true, 00:11:29.613 "flush": true, 00:11:29.613 "reset": true, 00:11:29.613 "nvme_admin": false, 00:11:29.613 "nvme_io": false, 00:11:29.613 "nvme_io_md": false, 00:11:29.613 "write_zeroes": true, 00:11:29.613 "zcopy": true, 00:11:29.613 "get_zone_info": false, 00:11:29.613 "zone_management": false, 00:11:29.613 "zone_append": false, 00:11:29.613 "compare": false, 00:11:29.613 "compare_and_write": 
false, 00:11:29.613 "abort": true, 00:11:29.613 "seek_hole": false, 00:11:29.613 "seek_data": false, 00:11:29.613 "copy": true, 00:11:29.613 "nvme_iov_md": false 00:11:29.613 }, 00:11:29.613 "memory_domains": [ 00:11:29.613 { 00:11:29.613 "dma_device_id": "system", 00:11:29.613 "dma_device_type": 1 00:11:29.613 }, 00:11:29.613 { 00:11:29.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.613 "dma_device_type": 2 00:11:29.613 } 00:11:29.613 ], 00:11:29.613 "driver_specific": {} 00:11:29.613 } 00:11:29.613 ]' 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:29.613 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.547 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:30.547 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.547 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.547 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.547 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.442 19:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:32.442 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.699 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.957 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:34.328 19:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.328 ************************************ 00:11:34.328 START TEST filesystem_ext4 00:11:34.328 ************************************ 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:34.328 19:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:34.328 19:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:34.328 mke2fs 1.47.0 (5-Feb-2023) 00:11:34.328 Discarding device blocks: 0/522240 done 00:11:34.328 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:34.328 Filesystem UUID: b70957a8-de9a-40e4-945a-0d7971fa1997 00:11:34.328 Superblock backups stored on blocks: 00:11:34.328 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:34.328 00:11:34.328 Allocating group tables: 0/64 done 00:11:34.328 Writing inode tables: 0/64 done 00:11:36.857 Creating journal (8192 blocks): done 00:11:36.857 Writing superblocks and filesystem accounting information: 0/64 done 00:11:36.857 00:11:36.857 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:36.857 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.416 19:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 152596 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.416 00:11:43.416 real 0m8.602s 00:11:43.416 user 0m0.013s 00:11:43.416 sys 0m0.110s 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.416 ************************************ 00:11:43.416 END TEST filesystem_ext4 00:11:43.416 ************************************ 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.416 
19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.416 ************************************ 00:11:43.416 START TEST filesystem_btrfs 00:11:43.416 ************************************ 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:43.416 19:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.416 btrfs-progs v6.8.1 00:11:43.416 See https://btrfs.readthedocs.io for more information. 00:11:43.416 00:11:43.416 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:43.416 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.416 this does not affect your deployments: 00:11:43.416 - DUP for metadata (-m dup) 00:11:43.416 - enabled no-holes (-O no-holes) 00:11:43.416 - enabled free-space-tree (-R free-space-tree) 00:11:43.416 00:11:43.416 Label: (null) 00:11:43.416 UUID: 1818d75f-8cd1-4e6e-9e0c-3b470326b930 00:11:43.416 Node size: 16384 00:11:43.416 Sector size: 4096 (CPU page size: 4096) 00:11:43.416 Filesystem size: 510.00MiB 00:11:43.416 Block group profiles: 00:11:43.416 Data: single 8.00MiB 00:11:43.416 Metadata: DUP 32.00MiB 00:11:43.416 System: DUP 8.00MiB 00:11:43.416 SSD detected: yes 00:11:43.416 Zoned device: no 00:11:43.416 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.416 Checksum: crc32c 00:11:43.416 Number of devices: 1 00:11:43.416 Devices: 00:11:43.416 ID SIZE PATH 00:11:43.416 1 510.00MiB /dev/nvme0n1p1 00:11:43.416 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:43.416 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.675 19:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 152596 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.675 00:11:43.675 real 0m0.944s 00:11:43.675 user 0m0.014s 00:11:43.675 sys 0m0.138s 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.675 
19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.675 ************************************ 00:11:43.675 END TEST filesystem_btrfs 00:11:43.675 ************************************ 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.675 ************************************ 00:11:43.675 START TEST filesystem_xfs 00:11:43.675 ************************************ 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.675 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:43.934 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:43.934 = sectsz=512 attr=2, projid32bit=1 00:11:43.934 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:43.934 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:43.934 data = bsize=4096 blocks=130560, imaxpct=25 00:11:43.934 = sunit=0 swidth=0 blks 00:11:43.934 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:43.934 log =internal log bsize=4096 blocks=16384, version=2 00:11:43.934 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:43.934 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:44.499 Discarding blocks...Done. 
00:11:44.499 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.499 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.024 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.024 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.024 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.024 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.024 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 152596 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.025 19:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.025 00:11:47.025 real 0m3.256s 00:11:47.025 user 0m0.011s 00:11:47.025 sys 0m0.094s 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.025 ************************************ 00:11:47.025 END TEST filesystem_xfs 00:11:47.025 ************************************ 00:11:47.025 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 152596 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 152596 ']' 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 152596 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152596 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152596' 00:11:47.283 killing process with pid 152596 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 152596 00:11:47.283 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 152596 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:47.848 00:11:47.848 real 0m18.728s 00:11:47.848 user 1m12.510s 00:11:47.848 sys 0m2.452s 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.848 ************************************ 00:11:47.848 END TEST nvmf_filesystem_no_in_capsule 00:11:47.848 ************************************ 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.848 19:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.848 ************************************ 00:11:47.848 START TEST nvmf_filesystem_in_capsule 00:11:47.848 ************************************ 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=154965 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 154965 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 154965 ']' 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.848 19:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.848 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.848 [2024-12-06 19:10:32.849754] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:11:47.848 [2024-12-06 19:10:32.849830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.105 [2024-12-06 19:10:32.924691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.105 [2024-12-06 19:10:32.981956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.105 [2024-12-06 19:10:32.982019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.105 [2024-12-06 19:10:32.982046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.105 [2024-12-06 19:10:32.982057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.105 [2024-12-06 19:10:32.982066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:48.105 [2024-12-06 19:10:32.983599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.105 [2024-12-06 19:10:32.983715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.105 [2024-12-06 19:10:32.983830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.105 [2024-12-06 19:10:32.983824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.105 [2024-12-06 19:10:33.121861] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.105 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 Malloc1 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 19:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 [2024-12-06 19:10:33.296280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.362 19:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:48.362 { 00:11:48.362 "name": "Malloc1", 00:11:48.362 "aliases": [ 00:11:48.362 "11d28a42-1072-48c4-8d22-c3ab2cbd1b21" 00:11:48.362 ], 00:11:48.362 "product_name": "Malloc disk", 00:11:48.362 "block_size": 512, 00:11:48.362 "num_blocks": 1048576, 00:11:48.362 "uuid": "11d28a42-1072-48c4-8d22-c3ab2cbd1b21", 00:11:48.362 "assigned_rate_limits": { 00:11:48.362 "rw_ios_per_sec": 0, 00:11:48.362 "rw_mbytes_per_sec": 0, 00:11:48.362 "r_mbytes_per_sec": 0, 00:11:48.362 "w_mbytes_per_sec": 0 00:11:48.362 }, 00:11:48.362 "claimed": true, 00:11:48.362 "claim_type": "exclusive_write", 00:11:48.362 "zoned": false, 00:11:48.362 "supported_io_types": { 00:11:48.362 "read": true, 00:11:48.362 "write": true, 00:11:48.362 "unmap": true, 00:11:48.362 "flush": true, 00:11:48.362 "reset": true, 00:11:48.362 "nvme_admin": false, 00:11:48.362 "nvme_io": false, 00:11:48.362 "nvme_io_md": false, 00:11:48.362 "write_zeroes": true, 00:11:48.362 "zcopy": true, 00:11:48.362 "get_zone_info": false, 00:11:48.362 "zone_management": false, 00:11:48.362 "zone_append": false, 00:11:48.362 "compare": false, 00:11:48.362 "compare_and_write": false, 00:11:48.362 "abort": true, 00:11:48.362 "seek_hole": false, 00:11:48.362 "seek_data": false, 00:11:48.362 "copy": true, 00:11:48.362 "nvme_iov_md": false 00:11:48.362 }, 00:11:48.362 "memory_domains": [ 00:11:48.362 { 00:11:48.362 "dma_device_id": "system", 00:11:48.362 "dma_device_type": 1 00:11:48.362 }, 00:11:48.362 { 00:11:48.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.362 "dma_device_type": 2 00:11:48.362 } 00:11:48.362 ], 00:11:48.362 
"driver_specific": {} 00:11:48.362 } 00:11:48.362 ]' 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:48.362 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.292 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.292 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.292 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.292 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:49.292 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:51.188 19:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:51.188 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:51.445 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:52.379 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.312 ************************************ 00:11:53.312 START TEST filesystem_in_capsule_ext4 00:11:53.312 ************************************ 00:11:53.312 19:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.312 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:53.313 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:53.313 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:53.313 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:53.313 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:53.313 mke2fs 1.47.0 (5-Feb-2023) 00:11:53.571 Discarding device blocks: 
0/522240 done 00:11:53.571 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:53.571 Filesystem UUID: 9d25f892-ca08-493c-a143-322f4903c2cc 00:11:53.571 Superblock backups stored on blocks: 00:11:53.571 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:53.571 00:11:53.571 Allocating group tables: 0/64 done 00:11:53.571 Writing inode tables: 0/64 done 00:11:53.829 Creating journal (8192 blocks): done 00:11:53.829 Writing superblocks and filesystem accounting information: 0/64 done 00:11:53.829 00:11:53.829 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:53.829 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 154965 00:11:59.093 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.093 00:11:59.093 real 0m5.722s 00:11:59.093 user 0m0.020s 00:11:59.093 sys 0m0.055s 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:59.093 ************************************ 00:11:59.093 END TEST filesystem_in_capsule_ext4 00:11:59.093 ************************************ 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.093 ************************************ 00:11:59.093 START 
TEST filesystem_in_capsule_btrfs 00:11:59.093 ************************************ 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:59.093 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:59.352 btrfs-progs v6.8.1 00:11:59.352 See https://btrfs.readthedocs.io for more information. 00:11:59.352 00:11:59.352 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:59.352 NOTE: several default settings have changed in version 5.15, please make sure 00:11:59.352 this does not affect your deployments: 00:11:59.352 - DUP for metadata (-m dup) 00:11:59.352 - enabled no-holes (-O no-holes) 00:11:59.352 - enabled free-space-tree (-R free-space-tree) 00:11:59.352 00:11:59.352 Label: (null) 00:11:59.352 UUID: 4c6049e4-2770-4fef-912e-f59cc6b2835b 00:11:59.352 Node size: 16384 00:11:59.352 Sector size: 4096 (CPU page size: 4096) 00:11:59.352 Filesystem size: 510.00MiB 00:11:59.352 Block group profiles: 00:11:59.352 Data: single 8.00MiB 00:11:59.352 Metadata: DUP 32.00MiB 00:11:59.352 System: DUP 8.00MiB 00:11:59.352 SSD detected: yes 00:11:59.352 Zoned device: no 00:11:59.352 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:59.352 Checksum: crc32c 00:11:59.352 Number of devices: 1 00:11:59.352 Devices: 00:11:59.352 ID SIZE PATH 00:11:59.352 1 510.00MiB /dev/nvme0n1p1 00:11:59.352 00:11:59.352 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:59.352 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 154965 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.919 00:11:59.919 real 0m0.868s 00:11:59.919 user 0m0.012s 00:11:59.919 sys 0m0.103s 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.919 ************************************ 00:11:59.919 END TEST filesystem_in_capsule_btrfs 00:11:59.919 ************************************ 00:11:59.919 19:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.919 ************************************ 00:11:59.919 START TEST filesystem_in_capsule_xfs 00:11:59.919 ************************************ 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:59.919 
19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:59.919 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:00.178 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:00.178 = sectsz=512 attr=2, projid32bit=1 00:12:00.178 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:00.178 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:00.178 data = bsize=4096 blocks=130560, imaxpct=25 00:12:00.178 = sunit=0 swidth=0 blks 00:12:00.178 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:00.178 log =internal log bsize=4096 blocks=16384, version=2 00:12:00.178 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:00.178 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:01.110 Discarding blocks...Done. 
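The ext4, btrfs, and xfs traces above all pass through the same `make_filesystem` helper (the `autotest_common.sh@930`-`@941` lines), which picks mkfs's force flag per filesystem type: ext4 uses `-F`, the others fall through to `-f`. A minimal sketch of that flag-selection logic, assuming the helper and device names shown here are illustrative reconstructions from the trace rather than the exact upstream function:

```shell
# Sketch of the force-flag selection seen in the make_filesystem trace.
# make_filesystem_cmd is a hypothetical name; it only builds the command
# string instead of formatting a device, so it is safe to run anywhere.
make_filesystem_cmd() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mkfs.ext4 uses uppercase -F to force formatting
    else
        force=-f    # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem_cmd ext4 /dev/nvme0n1p1
make_filesystem_cmd xfs /dev/nvme0n1p1
```

After formatting, each test then runs the same verification cycle visible in the trace: mount the partition, `touch`/`sync`/`rm` a file, unmount, confirm the target process is still alive with `kill -0`, and check `lsblk` still lists both the namespace and its partition.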
00:12:01.110 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:01.110 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 154965 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.634 00:12:03.634 real 0m3.503s 00:12:03.634 user 0m0.015s 00:12:03.634 sys 0m0.063s 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.634 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.634 ************************************ 00:12:03.634 END TEST filesystem_in_capsule_xfs 00:12:03.634 ************************************ 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.635 19:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 154965 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 154965 ']' 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 154965 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:03.635 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.635 19:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154965 00:12:03.892 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.892 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.892 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154965' 00:12:03.892 killing process with pid 154965 00:12:03.892 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 154965 00:12:03.893 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 154965 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:04.151 00:12:04.151 real 0m16.324s 00:12:04.151 user 1m3.122s 00:12:04.151 sys 0m2.075s 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.151 ************************************ 00:12:04.151 END TEST nvmf_filesystem_in_capsule 00:12:04.151 ************************************ 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.151 rmmod nvme_tcp 00:12:04.151 rmmod nvme_fabrics 00:12:04.151 rmmod nvme_keyring 00:12:04.151 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.412 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.321 00:12:06.321 real 0m39.895s 00:12:06.321 user 2m16.713s 00:12:06.321 sys 0m6.307s 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 ************************************ 00:12:06.321 END TEST nvmf_filesystem 00:12:06.321 ************************************ 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 ************************************ 00:12:06.321 START TEST nvmf_target_discovery 00:12:06.321 ************************************ 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:06.321 * Looking for test storage... 
00:12:06.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.321 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.580 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.580 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.580 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.580 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:06.581 
19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.581 --rc genhtml_branch_coverage=1 00:12:06.581 --rc genhtml_function_coverage=1 00:12:06.581 --rc genhtml_legend=1 00:12:06.581 --rc geninfo_all_blocks=1 00:12:06.581 --rc geninfo_unexecuted_blocks=1 00:12:06.581 00:12:06.581 ' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.581 --rc genhtml_branch_coverage=1 00:12:06.581 --rc genhtml_function_coverage=1 00:12:06.581 --rc genhtml_legend=1 00:12:06.581 --rc geninfo_all_blocks=1 00:12:06.581 --rc geninfo_unexecuted_blocks=1 00:12:06.581 00:12:06.581 ' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.581 --rc genhtml_branch_coverage=1 00:12:06.581 --rc genhtml_function_coverage=1 00:12:06.581 --rc genhtml_legend=1 00:12:06.581 --rc geninfo_all_blocks=1 00:12:06.581 --rc geninfo_unexecuted_blocks=1 00:12:06.581 00:12:06.581 ' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.581 --rc genhtml_branch_coverage=1 00:12:06.581 --rc genhtml_function_coverage=1 00:12:06.581 --rc genhtml_legend=1 00:12:06.581 --rc geninfo_all_blocks=1 00:12:06.581 --rc geninfo_unexecuted_blocks=1 00:12:06.581 00:12:06.581 ' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.581 19:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:06.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:06.581 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:12:06.582 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:09.120 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:12:09.121 Found 0000:84:00.0 (0x8086 - 0x159b)
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:12:09.121 Found 0000:84:00.1 (0x8086 - 0x159b)
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:12:09.121 Found net devices under 0000:84:00.0: cvl_0_0
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:12:09.121 Found net devices under 0000:84:00.1: cvl_0_1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:09.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:09.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms
00:12:09.121
00:12:09.121 --- 10.0.0.2 ping statistics ---
00:12:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.121 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:09.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:09.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:12:09.121
00:12:09.121 --- 10.0.0.1 ping statistics ---
00:12:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:09.121 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:09.121 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=159124
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 159124
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 159124 ']'
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:09.122 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.122 [2024-12-06 19:10:53.931430] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:12:09.122 [2024-12-06 19:10:53.931525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:09.122 [2024-12-06 19:10:54.006526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:09.122 [2024-12-06 19:10:54.064107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:09.122 [2024-12-06 19:10:54.064162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:09.122 [2024-12-06 19:10:54.064175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:09.122 [2024-12-06 19:10:54.064186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:09.122 [2024-12-06 19:10:54.064195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:09.122 [2024-12-06 19:10:54.065748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:09.122 [2024-12-06 19:10:54.065788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:09.122 [2024-12-06 19:10:54.065814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:09.122 [2024-12-06 19:10:54.065818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.380 [2024-12-06 19:10:54.206832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.380 Null1
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:12:09.380 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 [2024-12-06 19:10:54.262900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 Null2
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 Null3
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 Null4
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.381 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420
00:12:09.639
00:12:09.639 Discovery Log Number of Records 6, Generation counter 6
00:12:09.639 =====Discovery Log Entry 0======
00:12:09.639 trtype: tcp
00:12:09.639 adrfam: ipv4
00:12:09.639 subtype: current discovery subsystem
00:12:09.639 treq: not required
00:12:09.639 portid: 0
00:12:09.639 trsvcid: 4420
00:12:09.639 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:09.639 traddr: 10.0.0.2
00:12:09.639 eflags: explicit discovery connections, duplicate discovery information
00:12:09.639 sectype: none
00:12:09.639 =====Discovery Log Entry 1======
00:12:09.639 trtype: tcp
00:12:09.639 adrfam: ipv4
00:12:09.639 subtype: nvme subsystem
00:12:09.639 treq: not required
00:12:09.639 portid: 0
00:12:09.639 trsvcid: 4420
00:12:09.639 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:09.639 traddr: 10.0.0.2
00:12:09.639 eflags: none
00:12:09.639 sectype: none
00:12:09.639 =====Discovery Log Entry 2======
trtype: tcp 00:12:09.639 adrfam: ipv4 00:12:09.639 subtype: nvme subsystem 00:12:09.639 treq: not required 00:12:09.639 portid: 0 00:12:09.639 trsvcid: 4420 00:12:09.639 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:09.639 traddr: 10.0.0.2 00:12:09.639 eflags: none 00:12:09.639 sectype: none 00:12:09.639 =====Discovery Log Entry 3====== 00:12:09.639 trtype: tcp 00:12:09.639 adrfam: ipv4 00:12:09.639 subtype: nvme subsystem 00:12:09.639 treq: not required 00:12:09.639 portid: 0 00:12:09.639 trsvcid: 4420 00:12:09.639 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:09.639 traddr: 10.0.0.2 00:12:09.639 eflags: none 00:12:09.639 sectype: none 00:12:09.639 =====Discovery Log Entry 4====== 00:12:09.639 trtype: tcp 00:12:09.639 adrfam: ipv4 00:12:09.639 subtype: nvme subsystem 00:12:09.639 treq: not required 00:12:09.639 portid: 0 00:12:09.639 trsvcid: 4420 00:12:09.639 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:09.639 traddr: 10.0.0.2 00:12:09.639 eflags: none 00:12:09.639 sectype: none 00:12:09.639 =====Discovery Log Entry 5====== 00:12:09.639 trtype: tcp 00:12:09.639 adrfam: ipv4 00:12:09.639 subtype: discovery subsystem referral 00:12:09.639 treq: not required 00:12:09.639 portid: 0 00:12:09.639 trsvcid: 4430 00:12:09.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.639 traddr: 10.0.0.2 00:12:09.639 eflags: none 00:12:09.639 sectype: none 00:12:09.639 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:09.639 Perform nvmf subsystem discovery via RPC 00:12:09.639 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:09.639 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.639 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.639 [ 00:12:09.639 { 00:12:09.639 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:09.639 "subtype": "Discovery", 00:12:09.639 "listen_addresses": [ 00:12:09.639 { 00:12:09.640 "trtype": "TCP", 00:12:09.640 "adrfam": "IPv4", 00:12:09.640 "traddr": "10.0.0.2", 00:12:09.640 "trsvcid": "4420" 00:12:09.640 } 00:12:09.640 ], 00:12:09.640 "allow_any_host": true, 00:12:09.640 "hosts": [] 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.640 "subtype": "NVMe", 00:12:09.640 "listen_addresses": [ 00:12:09.640 { 00:12:09.640 "trtype": "TCP", 00:12:09.640 "adrfam": "IPv4", 00:12:09.640 "traddr": "10.0.0.2", 00:12:09.640 "trsvcid": "4420" 00:12:09.640 } 00:12:09.640 ], 00:12:09.640 "allow_any_host": true, 00:12:09.640 "hosts": [], 00:12:09.640 "serial_number": "SPDK00000000000001", 00:12:09.640 "model_number": "SPDK bdev Controller", 00:12:09.640 "max_namespaces": 32, 00:12:09.640 "min_cntlid": 1, 00:12:09.640 "max_cntlid": 65519, 00:12:09.640 "namespaces": [ 00:12:09.640 { 00:12:09.640 "nsid": 1, 00:12:09.640 "bdev_name": "Null1", 00:12:09.640 "name": "Null1", 00:12:09.640 "nguid": "B90B4DBB8E38432CA51F727BE8AA4610", 00:12:09.640 "uuid": "b90b4dbb-8e38-432c-a51f-727be8aa4610" 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.640 "subtype": "NVMe", 00:12:09.640 "listen_addresses": [ 00:12:09.640 { 00:12:09.640 "trtype": "TCP", 00:12:09.640 "adrfam": "IPv4", 00:12:09.640 "traddr": "10.0.0.2", 00:12:09.640 "trsvcid": "4420" 00:12:09.640 } 00:12:09.640 ], 00:12:09.640 "allow_any_host": true, 00:12:09.640 "hosts": [], 00:12:09.640 "serial_number": "SPDK00000000000002", 00:12:09.640 "model_number": "SPDK bdev Controller", 00:12:09.640 "max_namespaces": 32, 00:12:09.640 "min_cntlid": 1, 00:12:09.640 "max_cntlid": 65519, 00:12:09.640 "namespaces": [ 00:12:09.640 { 00:12:09.640 "nsid": 1, 00:12:09.640 "bdev_name": "Null2", 00:12:09.640 "name": "Null2", 00:12:09.640 "nguid": "54FEE654D6CD4320937A13DC9B138857", 
00:12:09.640 "uuid": "54fee654-d6cd-4320-937a-13dc9b138857" 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:09.640 "subtype": "NVMe", 00:12:09.640 "listen_addresses": [ 00:12:09.640 { 00:12:09.640 "trtype": "TCP", 00:12:09.640 "adrfam": "IPv4", 00:12:09.640 "traddr": "10.0.0.2", 00:12:09.640 "trsvcid": "4420" 00:12:09.640 } 00:12:09.640 ], 00:12:09.640 "allow_any_host": true, 00:12:09.640 "hosts": [], 00:12:09.640 "serial_number": "SPDK00000000000003", 00:12:09.640 "model_number": "SPDK bdev Controller", 00:12:09.640 "max_namespaces": 32, 00:12:09.640 "min_cntlid": 1, 00:12:09.640 "max_cntlid": 65519, 00:12:09.640 "namespaces": [ 00:12:09.640 { 00:12:09.640 "nsid": 1, 00:12:09.640 "bdev_name": "Null3", 00:12:09.640 "name": "Null3", 00:12:09.640 "nguid": "D6C09AB15F314BFCB49685E24B648B20", 00:12:09.640 "uuid": "d6c09ab1-5f31-4bfc-b496-85e24b648b20" 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:09.640 "subtype": "NVMe", 00:12:09.640 "listen_addresses": [ 00:12:09.640 { 00:12:09.640 "trtype": "TCP", 00:12:09.640 "adrfam": "IPv4", 00:12:09.640 "traddr": "10.0.0.2", 00:12:09.640 "trsvcid": "4420" 00:12:09.640 } 00:12:09.640 ], 00:12:09.640 "allow_any_host": true, 00:12:09.640 "hosts": [], 00:12:09.640 "serial_number": "SPDK00000000000004", 00:12:09.640 "model_number": "SPDK bdev Controller", 00:12:09.640 "max_namespaces": 32, 00:12:09.640 "min_cntlid": 1, 00:12:09.640 "max_cntlid": 65519, 00:12:09.640 "namespaces": [ 00:12:09.640 { 00:12:09.640 "nsid": 1, 00:12:09.640 "bdev_name": "Null4", 00:12:09.640 "name": "Null4", 00:12:09.640 "nguid": "561FEF32C895485B9669322FF1CBF0F0", 00:12:09.640 "uuid": "561fef32-c895-485b-9669-322ff1cbf0f0" 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 
19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.640 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.899 rmmod nvme_tcp 00:12:09.899 rmmod nvme_fabrics 00:12:09.899 rmmod nvme_keyring 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 159124 ']' 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 159124 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 159124 ']' 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 159124 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:09.899 
19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 159124 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 159124' 00:12:09.899 killing process with pid 159124 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 159124 00:12:09.899 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 159124 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.172 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.173 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.086 00:12:12.086 real 0m5.751s 00:12:12.086 user 0m4.705s 00:12:12.086 sys 0m2.055s 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.086 ************************************ 00:12:12.086 END TEST nvmf_target_discovery 00:12:12.086 ************************************ 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.086 ************************************ 00:12:12.086 START TEST nvmf_referrals 00:12:12.086 ************************************ 00:12:12.086 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.345 * Looking for test storage... 
00:12:12.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:12.345 19:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.345 
--rc genhtml_branch_coverage=1 00:12:12.345 --rc genhtml_function_coverage=1 00:12:12.345 --rc genhtml_legend=1 00:12:12.345 --rc geninfo_all_blocks=1 00:12:12.345 --rc geninfo_unexecuted_blocks=1 00:12:12.345 00:12:12.345 ' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.345 --rc genhtml_branch_coverage=1 00:12:12.345 --rc genhtml_function_coverage=1 00:12:12.345 --rc genhtml_legend=1 00:12:12.345 --rc geninfo_all_blocks=1 00:12:12.345 --rc geninfo_unexecuted_blocks=1 00:12:12.345 00:12:12.345 ' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.345 --rc genhtml_branch_coverage=1 00:12:12.345 --rc genhtml_function_coverage=1 00:12:12.345 --rc genhtml_legend=1 00:12:12.345 --rc geninfo_all_blocks=1 00:12:12.345 --rc geninfo_unexecuted_blocks=1 00:12:12.345 00:12:12.345 ' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.345 --rc genhtml_branch_coverage=1 00:12:12.345 --rc genhtml_function_coverage=1 00:12:12.345 --rc genhtml_legend=1 00:12:12.345 --rc geninfo_all_blocks=1 00:12:12.345 --rc geninfo_unexecuted_blocks=1 00:12:12.345 00:12:12.345 ' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.345 
19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.345 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.346 19:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.346 19:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.346 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.881 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.881 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.881 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.881 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:14.882 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:14.882 Found 
0000:84:00.1 (0x8086 - 0x159b) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:14.882 Found net devices under 0000:84:00.0: cvl_0_0 00:12:14.882 19:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:14.882 Found net devices under 0000:84:00.1: cvl_0_1 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.882 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:12:14.883 00:12:14.883 --- 10.0.0.2 ping statistics --- 00:12:14.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.883 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:12:14.883 00:12:14.883 --- 10.0.0.1 ping statistics --- 00:12:14.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.883 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=161244 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 161244 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 161244 ']' 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.883 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.883 [2024-12-06 19:10:59.711359] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:12:14.883 [2024-12-06 19:10:59.711450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.883 [2024-12-06 19:10:59.782891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.883 [2024-12-06 19:10:59.836709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.883 [2024-12-06 19:10:59.836791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:14.883 [2024-12-06 19:10:59.836806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.883 [2024-12-06 19:10:59.836816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.883 [2024-12-06 19:10:59.836825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.883 [2024-12-06 19:10:59.838413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.883 [2024-12-06 19:10:59.838470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.883 [2024-12-06 19:10:59.838577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.883 [2024-12-06 19:10:59.838580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 [2024-12-06 19:10:59.985954] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 [2024-12-06 19:11:00.009935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.143 19:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.143 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.401 19:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.401 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.659 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.917 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:16.174 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.175 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.432 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:16.689 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:16.689 19:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:16.689 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.690 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.947 rmmod nvme_tcp 00:12:16.947 rmmod nvme_fabrics 00:12:16.947 rmmod nvme_keyring 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 161244 ']' 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 161244 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 161244 ']' 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 161244 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.947 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161244 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161244' 00:12:17.274 killing process with pid 161244 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 161244 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 161244 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.274 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.329 00:12:19.329 real 0m7.164s 00:12:19.329 user 0m11.045s 00:12:19.329 sys 0m2.411s 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.329 ************************************ 
00:12:19.329 END TEST nvmf_referrals 00:12:19.329 ************************************ 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.329 ************************************ 00:12:19.329 START TEST nvmf_connect_disconnect 00:12:19.329 ************************************ 00:12:19.329 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:19.329 * Looking for test storage... 
00:12:19.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.614 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.615 --rc genhtml_branch_coverage=1 00:12:19.615 --rc genhtml_function_coverage=1 00:12:19.615 --rc genhtml_legend=1 00:12:19.615 --rc geninfo_all_blocks=1 00:12:19.615 --rc geninfo_unexecuted_blocks=1 00:12:19.615 00:12:19.615 ' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.615 --rc genhtml_branch_coverage=1 00:12:19.615 --rc genhtml_function_coverage=1 00:12:19.615 --rc genhtml_legend=1 00:12:19.615 --rc geninfo_all_blocks=1 00:12:19.615 --rc geninfo_unexecuted_blocks=1 00:12:19.615 00:12:19.615 ' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.615 --rc genhtml_branch_coverage=1 00:12:19.615 --rc genhtml_function_coverage=1 00:12:19.615 --rc genhtml_legend=1 00:12:19.615 --rc geninfo_all_blocks=1 00:12:19.615 --rc geninfo_unexecuted_blocks=1 00:12:19.615 00:12:19.615 ' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.615 --rc genhtml_branch_coverage=1 00:12:19.615 --rc genhtml_function_coverage=1 00:12:19.615 --rc genhtml_legend=1 00:12:19.615 --rc geninfo_all_blocks=1 00:12:19.615 --rc geninfo_unexecuted_blocks=1 00:12:19.615 00:12:19.615 ' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.615 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.688 19:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.688 19:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:21.688 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:21.688 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.688 19:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:21.688 Found net devices under 0000:84:00.0: cvl_0_0 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.688 19:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:21.688 Found net devices under 0000:84:00.1: cvl_0_1 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.688 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.689 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.689 19:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:21.976 00:12:21.976 --- 10.0.0.2 ping statistics --- 00:12:21.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.976 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:12:21.976 00:12:21.976 --- 10.0.0.1 ping statistics --- 00:12:21.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.976 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=163579 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 163579 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 163579 ']' 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.976 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.976 [2024-12-06 19:11:06.829228] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:12:21.976 [2024-12-06 19:11:06.829322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.976 [2024-12-06 19:11:06.903439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.976 [2024-12-06 19:11:06.962180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
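[Editor's note: the `nvmf_tcp_init` trace above (common.sh@250–291) builds a two-interface loopback topology: the target-side port is moved into a network namespace and the initiator-side port stays in the root namespace, so TCP traffic between 10.0.0.1 and 10.0.0.2 crosses the physical wire. A dry-run recap of that sequence is sketched below; the interface names cvl_0_0/cvl_0_1 are specific to this host's E810 NICs, and `run` only echoes each command — replace it with `sudo` to actually apply the steps.]

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology the harness sets up above.
# Assumption: cvl_0_0 (target side) and cvl_0_1 (initiator side) exist on this host.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }          # swap the echo for 'sudo' to apply for real

run ip -4 addr flush cvl_0_0    # clear any stale addresses
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                       # target lives in its own netns
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                       # sanity checks in both directions
run ip netns exec "$NS" ping -c 1 10.0.0.1
```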
00:12:21.976 [2024-12-06 19:11:06.962248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.976 [2024-12-06 19:11:06.962261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.976 [2024-12-06 19:11:06.962272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.976 [2024-12-06 19:11:06.962282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.976 [2024-12-06 19:11:06.963992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.976 [2024-12-06 19:11:06.964050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.976 [2024-12-06 19:11:06.964116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.976 [2024-12-06 19:11:06.964119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:22.278 19:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-12-06 19:11:07.113968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 19:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:22.278 [2024-12-06 19:11:07.179049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:22.278 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:24.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:36.628 19:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.628 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.628 rmmod nvme_tcp 00:12:36.628 rmmod nvme_fabrics 00:12:36.628 rmmod nvme_keyring 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 163579 ']' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 163579 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 163579 ']' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 163579 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163579 00:12:36.629 
19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163579' 00:12:36.629 killing process with pid 163579 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 163579 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 163579 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.629 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.536 00:12:38.536 real 0m19.098s 00:12:38.536 user 0m57.190s 00:12:38.536 sys 0m3.510s 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 ************************************ 00:12:38.536 END TEST nvmf_connect_disconnect 00:12:38.536 ************************************ 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 ************************************ 00:12:38.536 START TEST nvmf_multitarget 00:12:38.536 ************************************ 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:38.536 * Looking for test storage... 
00:12:38.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.536 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.798 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.798 --rc genhtml_branch_coverage=1 00:12:38.798 --rc genhtml_function_coverage=1 00:12:38.798 --rc genhtml_legend=1 00:12:38.798 --rc geninfo_all_blocks=1 00:12:38.798 --rc geninfo_unexecuted_blocks=1 00:12:38.798 00:12:38.798 ' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.798 --rc genhtml_branch_coverage=1 00:12:38.798 --rc genhtml_function_coverage=1 00:12:38.798 --rc genhtml_legend=1 00:12:38.798 --rc geninfo_all_blocks=1 00:12:38.798 --rc geninfo_unexecuted_blocks=1 00:12:38.798 00:12:38.798 ' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.798 --rc genhtml_branch_coverage=1 00:12:38.798 --rc genhtml_function_coverage=1 00:12:38.798 --rc genhtml_legend=1 00:12:38.798 --rc geninfo_all_blocks=1 00:12:38.798 --rc geninfo_unexecuted_blocks=1 00:12:38.798 00:12:38.798 ' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.798 --rc genhtml_branch_coverage=1 00:12:38.798 --rc genhtml_function_coverage=1 00:12:38.798 --rc genhtml_legend=1 00:12:38.798 --rc geninfo_all_blocks=1 00:12:38.798 --rc geninfo_unexecuted_blocks=1 00:12:38.798 00:12:38.798 ' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.798 19:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.798 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.799 19:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.799 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.356 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.356 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.356 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:41.357 19:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.357 19:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:41.357 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:41.357 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.357 19:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:41.357 Found net devices under 0000:84:00.0: cvl_0_0 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.357 
19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:41.357 Found net devices under 0000:84:00.1: cvl_0_1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.357 19:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:12:41.357 00:12:41.357 --- 10.0.0.2 ping statistics --- 00:12:41.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.357 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:41.357 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:12:41.357 00:12:41.357 --- 10.0.0.1 ping statistics --- 00:12:41.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.358 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=167383 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 167383 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 167383 ']' 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.358 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 [2024-12-06 19:11:26.033677] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:12:41.358 [2024-12-06 19:11:26.033764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.358 [2024-12-06 19:11:26.104271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.358 [2024-12-06 19:11:26.161705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.358 [2024-12-06 19:11:26.161780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:41.358 [2024-12-06 19:11:26.161809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.358 [2024-12-06 19:11:26.161821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.358 [2024-12-06 19:11:26.161830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.358 [2024-12-06 19:11:26.163438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.358 [2024-12-06 19:11:26.163495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.358 [2024-12-06 19:11:26.163559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.358 [2024-12-06 19:11:26.163562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:41.358 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.358 19:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:41.616 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:41.616 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:41.616 "nvmf_tgt_1" 00:12:41.616 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:41.873 "nvmf_tgt_2" 00:12:41.873 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.873 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:41.873 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:41.873 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:41.873 true 00:12:41.873 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:42.132 true 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.132 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.132 rmmod nvme_tcp 00:12:42.132 rmmod nvme_fabrics 00:12:42.132 rmmod nvme_keyring 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 167383 ']' 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 167383 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 167383 ']' 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 167383 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167383 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167383' 00:12:42.390 killing process with pid 167383 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 167383 00:12:42.390 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 167383 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.650 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.558 00:12:44.558 real 0m6.010s 00:12:44.558 user 0m6.772s 00:12:44.558 sys 0m2.105s 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.558 ************************************ 00:12:44.558 END TEST nvmf_multitarget 00:12:44.558 ************************************ 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.558 ************************************ 00:12:44.558 START TEST nvmf_rpc 00:12:44.558 ************************************ 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.558 * Looking for test storage... 
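The nvmf_multitarget teardown traced above (nvmftestfini → nvmfcleanup) suspends errexit and retries `modprobe -v -r nvme-tcp` in a bounded loop, since a kernel module with active dependents can transiently fail to unload. A minimal sketch of that retry pattern, with hypothetical names (`retry_unload`, `unload_cmd` do not appear in the SPDK scripts; `unload_cmd` stands in for the real modprobe call so the logic can run without root):

```shell
# Sketch of the unload-retry pattern from nvmf/common.sh@124-128 as seen
# in the trace: disable errexit, attempt removal up to 20 times, then
# restore errexit. Prints how many attempts were needed.
retry_unload() {
    local unload_cmd=$1 attempts=0
    set +e                      # tolerate transient "module in use" failures
    for i in {1..20}; do
        attempts=$((attempts + 1))
        $unload_cmd && break    # stop as soon as the unload succeeds
        sleep 0.1
    done
    set -e
    echo "$attempts"
}
```

The `set +e` / `set -e` bracketing mirrors the trace exactly; without it, the first failed `modprobe -r` would abort the whole test script under errexit.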
00:12:44.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.558 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.818 19:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.818 --rc genhtml_branch_coverage=1 00:12:44.818 --rc genhtml_function_coverage=1 00:12:44.818 --rc genhtml_legend=1 00:12:44.818 --rc geninfo_all_blocks=1 00:12:44.818 --rc geninfo_unexecuted_blocks=1 
00:12:44.818 00:12:44.818 ' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.818 --rc genhtml_branch_coverage=1 00:12:44.818 --rc genhtml_function_coverage=1 00:12:44.818 --rc genhtml_legend=1 00:12:44.818 --rc geninfo_all_blocks=1 00:12:44.818 --rc geninfo_unexecuted_blocks=1 00:12:44.818 00:12:44.818 ' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.818 --rc genhtml_branch_coverage=1 00:12:44.818 --rc genhtml_function_coverage=1 00:12:44.818 --rc genhtml_legend=1 00:12:44.818 --rc geninfo_all_blocks=1 00:12:44.818 --rc geninfo_unexecuted_blocks=1 00:12:44.818 00:12:44.818 ' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.818 --rc genhtml_branch_coverage=1 00:12:44.818 --rc genhtml_function_coverage=1 00:12:44.818 --rc genhtml_legend=1 00:12:44.818 --rc geninfo_all_blocks=1 00:12:44.818 --rc geninfo_unexecuted_blocks=1 00:12:44.818 00:12:44.818 ' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.818 19:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.818 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.819 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.819 19:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.363 
19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.363 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 
(0x8086 - 0x159b)' 00:12:47.364 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:47.364 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
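The enumeration just traced maps each matched PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path, which is how `cvl_0_0` and `cvl_0_1` are discovered below. A hedged sketch of that step; `net_devs_for_pci` and `sysfs_root` are illustrative names not present in nvmf/common.sh, with the sysfs root parameterized so the logic runs without real hardware:

```shell
# Sketch of the sysfs lookup from nvmf/common.sh@411 and @427: the kernel
# exposes a NIC's interface name(s) as directory entries under
# <pci device>/net/, so a glob plus a path-strip expansion yields the
# bare interface names.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2
    local pci_net_devs=("$sysfs_root/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
    echo "${pci_net_devs[@]}"
}
```

Against a real system the first argument would be `/sys/bus/pci/devices`, matching the literal glob in the trace.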
00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:47.364 Found net devices under 0000:84:00.0: cvl_0_0 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:47.364 Found net devices under 0000:84:00.1: cvl_0_1 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.364 19:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.364 
19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:47.364 00:12:47.364 --- 10.0.0.2 ping statistics --- 00:12:47.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.364 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:47.364 00:12:47.364 --- 10.0.0.1 ping statistics --- 00:12:47.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.364 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.364 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=169503 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.364 
19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 169503 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 169503 ']' 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.364 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.364 [2024-12-06 19:11:32.066985] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:12:47.364 [2024-12-06 19:11:32.067116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.364 [2024-12-06 19:11:32.141596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.364 [2024-12-06 19:11:32.200571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.365 [2024-12-06 19:11:32.200624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.365 [2024-12-06 19:11:32.200653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.365 [2024-12-06 19:11:32.200665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:47.365 [2024-12-06 19:11:32.200675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.365 [2024-12-06 19:11:32.202263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.365 [2024-12-06 19:11:32.202322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.365 [2024-12-06 19:11:32.202389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.365 [2024-12-06 19:11:32.202391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:47.365 "tick_rate": 2700000000, 00:12:47.365 "poll_groups": [ 00:12:47.365 { 00:12:47.365 "name": "nvmf_tgt_poll_group_000", 00:12:47.365 "admin_qpairs": 0, 00:12:47.365 "io_qpairs": 0, 00:12:47.365 
"current_admin_qpairs": 0, 00:12:47.365 "current_io_qpairs": 0, 00:12:47.365 "pending_bdev_io": 0, 00:12:47.365 "completed_nvme_io": 0, 00:12:47.365 "transports": [] 00:12:47.365 }, 00:12:47.365 { 00:12:47.365 "name": "nvmf_tgt_poll_group_001", 00:12:47.365 "admin_qpairs": 0, 00:12:47.365 "io_qpairs": 0, 00:12:47.365 "current_admin_qpairs": 0, 00:12:47.365 "current_io_qpairs": 0, 00:12:47.365 "pending_bdev_io": 0, 00:12:47.365 "completed_nvme_io": 0, 00:12:47.365 "transports": [] 00:12:47.365 }, 00:12:47.365 { 00:12:47.365 "name": "nvmf_tgt_poll_group_002", 00:12:47.365 "admin_qpairs": 0, 00:12:47.365 "io_qpairs": 0, 00:12:47.365 "current_admin_qpairs": 0, 00:12:47.365 "current_io_qpairs": 0, 00:12:47.365 "pending_bdev_io": 0, 00:12:47.365 "completed_nvme_io": 0, 00:12:47.365 "transports": [] 00:12:47.365 }, 00:12:47.365 { 00:12:47.365 "name": "nvmf_tgt_poll_group_003", 00:12:47.365 "admin_qpairs": 0, 00:12:47.365 "io_qpairs": 0, 00:12:47.365 "current_admin_qpairs": 0, 00:12:47.365 "current_io_qpairs": 0, 00:12:47.365 "pending_bdev_io": 0, 00:12:47.365 "completed_nvme_io": 0, 00:12:47.365 "transports": [] 00:12:47.365 } 00:12:47.365 ] 00:12:47.365 }' 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:47.365 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.623 [2024-12-06 19:11:32.442894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.623 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:47.623 "tick_rate": 2700000000, 00:12:47.623 "poll_groups": [ 00:12:47.623 { 00:12:47.623 "name": "nvmf_tgt_poll_group_000", 00:12:47.623 "admin_qpairs": 0, 00:12:47.623 "io_qpairs": 0, 00:12:47.623 "current_admin_qpairs": 0, 00:12:47.623 "current_io_qpairs": 0, 00:12:47.623 "pending_bdev_io": 0, 00:12:47.623 "completed_nvme_io": 0, 00:12:47.623 "transports": [ 00:12:47.623 { 00:12:47.623 "trtype": "TCP" 00:12:47.623 } 00:12:47.623 ] 00:12:47.623 }, 00:12:47.623 { 00:12:47.623 "name": "nvmf_tgt_poll_group_001", 00:12:47.623 "admin_qpairs": 0, 00:12:47.623 "io_qpairs": 0, 00:12:47.623 "current_admin_qpairs": 0, 00:12:47.623 "current_io_qpairs": 0, 00:12:47.623 "pending_bdev_io": 0, 00:12:47.623 "completed_nvme_io": 0, 00:12:47.623 "transports": [ 00:12:47.623 { 00:12:47.623 "trtype": "TCP" 00:12:47.623 } 00:12:47.623 ] 00:12:47.623 }, 00:12:47.623 { 00:12:47.623 "name": "nvmf_tgt_poll_group_002", 00:12:47.623 "admin_qpairs": 0, 00:12:47.623 "io_qpairs": 0, 00:12:47.623 
"current_admin_qpairs": 0, 00:12:47.623 "current_io_qpairs": 0, 00:12:47.623 "pending_bdev_io": 0, 00:12:47.623 "completed_nvme_io": 0, 00:12:47.623 "transports": [ 00:12:47.623 { 00:12:47.623 "trtype": "TCP" 00:12:47.623 } 00:12:47.623 ] 00:12:47.623 }, 00:12:47.624 { 00:12:47.624 "name": "nvmf_tgt_poll_group_003", 00:12:47.624 "admin_qpairs": 0, 00:12:47.624 "io_qpairs": 0, 00:12:47.624 "current_admin_qpairs": 0, 00:12:47.624 "current_io_qpairs": 0, 00:12:47.624 "pending_bdev_io": 0, 00:12:47.624 "completed_nvme_io": 0, 00:12:47.624 "transports": [ 00:12:47.624 { 00:12:47.624 "trtype": "TCP" 00:12:47.624 } 00:12:47.624 ] 00:12:47.624 } 00:12:47.624 ] 00:12:47.624 }' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.624 Malloc1 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.624 [2024-12-06 19:11:32.612951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.624 
19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:47.624 [2024-12-06 19:11:32.635519] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:47.624 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:47.624 could not add new controller: failed to write to nvme-fabrics device 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:47.624 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.624 19:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.881 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.881 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.448 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.448 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:48.448 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.448 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:48.448 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.349 19:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.349 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.606 [2024-12-06 19:11:35.417234] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:50.606 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:50.606 could not add new controller: failed to write to nvme-fabrics device 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.606 19:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.606 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.172 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.172 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:51.172 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.172 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:51.172 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:53.072 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.332 [2024-12-06 19:11:38.237967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.332 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.900 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.900 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.900 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.900 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.900 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:56.432 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:56.432 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:56.432 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.433 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:56.433 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.433 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:56.433 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 19:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 [2024-12-06 19:11:41.053930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.433 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.691 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.691 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.691 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.691 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.691 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 [2024-12-06 19:11:43.803852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.483 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.483 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.483 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:59.483 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.483 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 [2024-12-06 19:11:46.684790] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.010 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.574 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.574 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.574 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.574 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:02.574 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.471 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.729 [2024-12-06 19:11:49.534465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.729 19:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.729 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.296 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.296 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.296 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.296 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:05.296 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.219 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 [2024-12-06 19:11:52.279595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 [2024-12-06 19:11:52.327628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.478 
19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 [2024-12-06 19:11:52.375848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.479 
19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 [2024-12-06 19:11:52.424026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 [2024-12-06 
19:11:52.472204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 
19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:07.479 "tick_rate": 2700000000, 00:13:07.479 "poll_groups": [ 00:13:07.479 { 00:13:07.479 "name": "nvmf_tgt_poll_group_000", 00:13:07.479 "admin_qpairs": 2, 00:13:07.479 "io_qpairs": 84, 00:13:07.479 "current_admin_qpairs": 0, 00:13:07.479 "current_io_qpairs": 0, 00:13:07.479 "pending_bdev_io": 0, 00:13:07.479 "completed_nvme_io": 179, 00:13:07.479 "transports": [ 00:13:07.479 { 00:13:07.479 "trtype": "TCP" 00:13:07.479 } 00:13:07.479 ] 00:13:07.479 }, 00:13:07.479 { 00:13:07.479 "name": "nvmf_tgt_poll_group_001", 00:13:07.479 "admin_qpairs": 2, 00:13:07.479 "io_qpairs": 84, 00:13:07.479 "current_admin_qpairs": 0, 00:13:07.479 "current_io_qpairs": 0, 00:13:07.479 "pending_bdev_io": 0, 00:13:07.479 "completed_nvme_io": 135, 00:13:07.479 "transports": [ 00:13:07.479 { 00:13:07.479 "trtype": "TCP" 00:13:07.479 } 00:13:07.479 ] 00:13:07.479 }, 00:13:07.479 { 00:13:07.479 "name": "nvmf_tgt_poll_group_002", 00:13:07.479 "admin_qpairs": 1, 00:13:07.479 "io_qpairs": 84, 00:13:07.479 "current_admin_qpairs": 0, 00:13:07.479 "current_io_qpairs": 0, 00:13:07.479 "pending_bdev_io": 0, 00:13:07.479 "completed_nvme_io": 233, 00:13:07.479 "transports": [ 00:13:07.479 { 00:13:07.479 "trtype": "TCP" 00:13:07.479 } 00:13:07.479 ] 00:13:07.479 }, 00:13:07.479 { 00:13:07.479 "name": "nvmf_tgt_poll_group_003", 00:13:07.479 "admin_qpairs": 2, 00:13:07.479 "io_qpairs": 84, 
00:13:07.479 "current_admin_qpairs": 0, 00:13:07.479 "current_io_qpairs": 0, 00:13:07.479 "pending_bdev_io": 0, 00:13:07.479 "completed_nvme_io": 139, 00:13:07.479 "transports": [ 00:13:07.479 { 00:13:07.479 "trtype": "TCP" 00:13:07.479 } 00:13:07.479 ] 00:13:07.479 } 00:13:07.479 ] 00:13:07.479 }' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:07.738 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.739 rmmod nvme_tcp 00:13:07.739 rmmod nvme_fabrics 00:13:07.739 rmmod nvme_keyring 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 169503 ']' 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 169503 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 169503 ']' 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 169503 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 169503 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 169503' 00:13:07.739 killing process with pid 169503 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 169503 00:13:07.739 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 169503 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.998 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.542 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.542 00:13:10.542 real 0m25.467s 00:13:10.542 user 1m22.293s 00:13:10.542 sys 0m4.425s 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 ************************************ 00:13:10.542 END TEST nvmf_rpc 00:13:10.542 
************************************ 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.542 19:11:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 ************************************ 00:13:10.542 START TEST nvmf_invalid 00:13:10.542 ************************************ 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:10.543 * Looking for test storage... 00:13:10.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.543 --rc genhtml_branch_coverage=1 00:13:10.543 --rc genhtml_function_coverage=1 00:13:10.543 --rc genhtml_legend=1 00:13:10.543 --rc geninfo_all_blocks=1 00:13:10.543 --rc geninfo_unexecuted_blocks=1 00:13:10.543 00:13:10.543 ' 
00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.543 --rc genhtml_branch_coverage=1 00:13:10.543 --rc genhtml_function_coverage=1 00:13:10.543 --rc genhtml_legend=1 00:13:10.543 --rc geninfo_all_blocks=1 00:13:10.543 --rc geninfo_unexecuted_blocks=1 00:13:10.543 00:13:10.543 ' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.543 --rc genhtml_branch_coverage=1 00:13:10.543 --rc genhtml_function_coverage=1 00:13:10.543 --rc genhtml_legend=1 00:13:10.543 --rc geninfo_all_blocks=1 00:13:10.543 --rc geninfo_unexecuted_blocks=1 00:13:10.543 00:13:10.543 ' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:10.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.543 --rc genhtml_branch_coverage=1 00:13:10.543 --rc genhtml_function_coverage=1 00:13:10.543 --rc genhtml_legend=1 00:13:10.543 --rc geninfo_all_blocks=1 00:13:10.543 --rc geninfo_unexecuted_blocks=1 00:13:10.543 00:13:10.543 ' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.543 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.543 
19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.543 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.543 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.544 19:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.544 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.454 19:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.454 19:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:12.454 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:12.454 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.454 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:12.454 Found net devices under 0000:84:00.0: cvl_0_0 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:12.455 Found net devices under 0000:84:00.1: cvl_0_1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.455 19:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.455 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.714 19:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:12.714 00:13:12.714 --- 10.0.0.2 ping statistics --- 00:13:12.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.714 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:13:12.714 00:13:12.714 --- 10.0.0.1 ping statistics --- 00:13:12.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.714 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.714 19:11:57 
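The nvmf_tcp_init steps traced above (flush both NICs, move the target NIC into a fresh namespace, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420, then ping both directions) can be condensed into one function. This is an illustrative sketch, not the literal common.sh code: the function name, argument order, and hard-coded addresses are assumptions mirroring what the trace shows, and it must run as root on interfaces like cvl_0_0/cvl_0_1.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the trace above.
# Hypothetical helper; names/arguments are illustrative, requires root.
setup_tcp_netns() {
  local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}

  ip -4 addr flush "$target_if"           # drop any stale addresses
  ip -4 addr flush "$initiator_if"

  ip netns add "$ns"                      # target side lives in its own netns
  ip link set "$target_if" netns "$ns"

  ip addr add 10.0.0.1/24 dev "$initiator_if"              # initiator IP
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP

  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up

  # allow NVMe/TCP traffic in, as the ipts wrapper does in the trace
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

  # sanity check both directions, matching the pings in the log
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this topology, commands prefixed by `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD array) run on the target side while the initiator stays in the default namespace.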
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=174141 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 174141 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 174141 ']' 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
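The `waitforlisten 174141` step above blocks until nvmf_tgt is alive and its JSON-RPC UNIX socket exists. A minimal sketch of that wait loop, assuming the default `/var/tmp/spdk.sock` path seen in the log (the real autotest_common.sh helper also retries RPC calls; this only polls for the socket):

```shell
#!/usr/bin/env bash
# Hedged sketch of a waitforlisten-style helper: poll until the pid is
# alive AND the RPC socket appears, or give up. Illustrative only.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
  while ((retries-- > 0)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$rpc_addr" ] && return 0           # socket is up, RPC is reachable
    sleep 0.1
  done
  return 1                                    # timed out
}
```

Callers like nvmfappstart store the pid (`nvmfpid=174141` in the trace), call this helper, and only then issue rpc.py commands against the socket.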
00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.714 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.714 [2024-12-06 19:11:57.639022] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:13:12.714 [2024-12-06 19:11:57.639116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.714 [2024-12-06 19:11:57.714506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.973 [2024-12-06 19:11:57.775390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.973 [2024-12-06 19:11:57.775448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.973 [2024-12-06 19:11:57.775462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.973 [2024-12-06 19:11:57.775473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.973 [2024-12-06 19:11:57.775483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:12.973 [2024-12-06 19:11:57.777301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.973 [2024-12-06 19:11:57.777390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.973 [2024-12-06 19:11:57.777326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.973 [2024-12-06 19:11:57.777394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:12.973 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6049 00:13:13.231 [2024-12-06 19:11:58.233932] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:13.231 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:13.231 { 00:13:13.231 "nqn": "nqn.2016-06.io.spdk:cnode6049", 00:13:13.231 "tgt_name": "foobar", 00:13:13.231 "method": "nvmf_create_subsystem", 00:13:13.231 "req_id": 1 00:13:13.231 } 00:13:13.231 Got JSON-RPC error 
response 00:13:13.231 response: 00:13:13.231 { 00:13:13.231 "code": -32603, 00:13:13.231 "message": "Unable to find target foobar" 00:13:13.231 }' 00:13:13.231 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:13.231 { 00:13:13.231 "nqn": "nqn.2016-06.io.spdk:cnode6049", 00:13:13.231 "tgt_name": "foobar", 00:13:13.231 "method": "nvmf_create_subsystem", 00:13:13.231 "req_id": 1 00:13:13.231 } 00:13:13.231 Got JSON-RPC error response 00:13:13.231 response: 00:13:13.231 { 00:13:13.231 "code": -32603, 00:13:13.231 "message": "Unable to find target foobar" 00:13:13.231 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:13.231 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:13.231 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode725 00:13:13.797 [2024-12-06 19:11:58.546974] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode725: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:13.797 { 00:13:13.797 "nqn": "nqn.2016-06.io.spdk:cnode725", 00:13:13.797 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:13.797 "method": "nvmf_create_subsystem", 00:13:13.797 "req_id": 1 00:13:13.797 } 00:13:13.797 Got JSON-RPC error response 00:13:13.797 response: 00:13:13.797 { 00:13:13.797 "code": -32602, 00:13:13.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:13.797 }' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:13.797 { 00:13:13.797 "nqn": "nqn.2016-06.io.spdk:cnode725", 00:13:13.797 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:13.797 "method": "nvmf_create_subsystem", 00:13:13.797 
"req_id": 1 00:13:13.797 } 00:13:13.797 Got JSON-RPC error response 00:13:13.797 response: 00:13:13.797 { 00:13:13.797 "code": -32602, 00:13:13.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:13.797 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode720 00:13:13.797 [2024-12-06 19:11:58.815890] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode720: invalid model number 'SPDK_Controller' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:13.797 { 00:13:13.797 "nqn": "nqn.2016-06.io.spdk:cnode720", 00:13:13.797 "model_number": "SPDK_Controller\u001f", 00:13:13.797 "method": "nvmf_create_subsystem", 00:13:13.797 "req_id": 1 00:13:13.797 } 00:13:13.797 Got JSON-RPC error response 00:13:13.797 response: 00:13:13.797 { 00:13:13.797 "code": -32602, 00:13:13.797 "message": "Invalid MN SPDK_Controller\u001f" 00:13:13.797 }' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:13.797 { 00:13:13.797 "nqn": "nqn.2016-06.io.spdk:cnode720", 00:13:13.797 "model_number": "SPDK_Controller\u001f", 00:13:13.797 "method": "nvmf_create_subsystem", 00:13:13.797 "req_id": 1 00:13:13.797 } 00:13:13.797 Got JSON-RPC error response 00:13:13.797 response: 00:13:13.797 { 00:13:13.797 "code": -32602, 00:13:13.797 "message": "Invalid MN SPDK_Controller\u001f" 00:13:13.797 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:13.797 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:14.056 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.056 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:14.057 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:14.057 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.057 19:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+_.![AM8$6W[^Dk:ewX)0' 00:13:14.057 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+_.![AM8$6W[^Dk:ewX)0' nqn.2016-06.io.spdk:cnode24517 00:13:14.318 [2024-12-06 19:11:59.164985] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24517: invalid serial number '+_.![AM8$6W[^Dk:ewX)0' 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:14.318 { 00:13:14.318 "nqn": "nqn.2016-06.io.spdk:cnode24517", 00:13:14.318 "serial_number": "+_.![AM8$6W[^Dk:ewX)0", 00:13:14.318 "method": "nvmf_create_subsystem", 00:13:14.318 "req_id": 1 00:13:14.318 } 00:13:14.318 Got JSON-RPC error response 00:13:14.318 response: 00:13:14.318 { 00:13:14.318 "code": -32602, 00:13:14.318 "message": "Invalid SN +_.![AM8$6W[^Dk:ewX)0" 00:13:14.318 }' 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:14.318 { 00:13:14.318 "nqn": "nqn.2016-06.io.spdk:cnode24517", 00:13:14.318 "serial_number": "+_.![AM8$6W[^Dk:ewX)0", 00:13:14.318 "method": "nvmf_create_subsystem", 00:13:14.318 "req_id": 1 00:13:14.318 } 00:13:14.318 Got JSON-RPC error response 00:13:14.318 response: 00:13:14.318 { 00:13:14.318 "code": -32602, 00:13:14.318 "message": "Invalid SN +_.![AM8$6W[^Dk:ewX)0" 00:13:14.318 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:14.318 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:14.318 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:14.319 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:14.320 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:14.320 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:14.320 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 
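The interleaved `printf %x` / `echo -e` / `string+=` lines in this trace are successive iterations of a single character-append loop in target/invalid.sh. A condensed reconstruction of that loop (the function name and the 33..126 character range are assumptions inferred from the trace, not the verbatim script source):

```shell
# Reconstruction of the per-character loop traced above: for each index,
# pick a printable ASCII code, render it via an \xHH escape, and append.
gen_random_string() {
    local length=$1 string='' ll ch
    for (( ll = 0; ll < length; ll++ )); do
        # 33..126 keeps every character printable and non-space
        # (assumption; the real script's character set may differ).
        printf -v ch "\\x$(printf '%x' $(( RANDOM % 94 + 33 )))"
        string+=$ch
    done
    echo "$string"
}

gen_random_string 41
```

The generated string is then fed to `rpc.py nvmf_create_subsystem` as a deliberately invalid serial or model number, as the RPC calls later in the trace show.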
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P' 00:13:14.320 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P' nqn.2016-06.io.spdk:cnode31705 00:13:14.579 [2024-12-06 19:11:59.606444] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31705: invalid model number 'lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P' 00:13:14.838 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:14.838 { 00:13:14.838 "nqn": "nqn.2016-06.io.spdk:cnode31705", 00:13:14.838 "model_number": "lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P", 00:13:14.838 "method": "nvmf_create_subsystem", 00:13:14.838 "req_id": 1 00:13:14.838 } 00:13:14.838 Got JSON-RPC error response 00:13:14.838 response: 00:13:14.838 { 00:13:14.838 "code": -32602, 00:13:14.838 "message": "Invalid MN lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P" 00:13:14.838 }' 00:13:14.838 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:14.838 { 00:13:14.838 "nqn": 
"nqn.2016-06.io.spdk:cnode31705", 00:13:14.838 "model_number": "lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P", 00:13:14.838 "method": "nvmf_create_subsystem", 00:13:14.838 "req_id": 1 00:13:14.838 } 00:13:14.838 Got JSON-RPC error response 00:13:14.838 response: 00:13:14.838 { 00:13:14.838 "code": -32602, 00:13:14.838 "message": "Invalid MN lgoV p]0t-2l&;j7lxcC~fQZk@#TYQHEKa7b:A*+P" 00:13:14.838 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:14.838 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:15.096 [2024-12-06 19:11:59.887465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.097 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:15.354 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:15.354 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:15.355 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:15.355 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:15.355 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:15.613 [2024-12-06 19:12:00.453340] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:15.613 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:15.613 { 00:13:15.613 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:15.613 "listen_address": { 00:13:15.613 "trtype": "tcp", 00:13:15.613 "traddr": "", 00:13:15.613 "trsvcid": "4421" 
00:13:15.613 }, 00:13:15.613 "method": "nvmf_subsystem_remove_listener", 00:13:15.613 "req_id": 1 00:13:15.613 } 00:13:15.613 Got JSON-RPC error response 00:13:15.613 response: 00:13:15.613 { 00:13:15.613 "code": -32602, 00:13:15.613 "message": "Invalid parameters" 00:13:15.613 }' 00:13:15.613 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:15.613 { 00:13:15.613 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:15.613 "listen_address": { 00:13:15.613 "trtype": "tcp", 00:13:15.613 "traddr": "", 00:13:15.613 "trsvcid": "4421" 00:13:15.613 }, 00:13:15.613 "method": "nvmf_subsystem_remove_listener", 00:13:15.613 "req_id": 1 00:13:15.613 } 00:13:15.613 Got JSON-RPC error response 00:13:15.613 response: 00:13:15.613 { 00:13:15.613 "code": -32602, 00:13:15.613 "message": "Invalid parameters" 00:13:15.613 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:15.613 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23469 -i 0 00:13:15.870 [2024-12-06 19:12:00.742282] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23469: invalid cntlid range [0-65519] 00:13:15.870 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:15.870 { 00:13:15.870 "nqn": "nqn.2016-06.io.spdk:cnode23469", 00:13:15.870 "min_cntlid": 0, 00:13:15.870 "method": "nvmf_create_subsystem", 00:13:15.870 "req_id": 1 00:13:15.870 } 00:13:15.870 Got JSON-RPC error response 00:13:15.870 response: 00:13:15.870 { 00:13:15.870 "code": -32602, 00:13:15.870 "message": "Invalid cntlid range [0-65519]" 00:13:15.870 }' 00:13:15.870 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:15.870 { 00:13:15.870 "nqn": "nqn.2016-06.io.spdk:cnode23469", 00:13:15.870 "min_cntlid": 0, 00:13:15.870 "method": 
"nvmf_create_subsystem", 00:13:15.870 "req_id": 1 00:13:15.870 } 00:13:15.870 Got JSON-RPC error response 00:13:15.870 response: 00:13:15.870 { 00:13:15.870 "code": -32602, 00:13:15.870 "message": "Invalid cntlid range [0-65519]" 00:13:15.870 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:15.870 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27253 -i 65520 00:13:16.127 [2024-12-06 19:12:01.027327] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27253: invalid cntlid range [65520-65519] 00:13:16.127 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:16.127 { 00:13:16.127 "nqn": "nqn.2016-06.io.spdk:cnode27253", 00:13:16.127 "min_cntlid": 65520, 00:13:16.127 "method": "nvmf_create_subsystem", 00:13:16.127 "req_id": 1 00:13:16.127 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [65520-65519]" 00:13:16.128 }' 00:13:16.128 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:16.128 { 00:13:16.128 "nqn": "nqn.2016-06.io.spdk:cnode27253", 00:13:16.128 "min_cntlid": 65520, 00:13:16.128 "method": "nvmf_create_subsystem", 00:13:16.128 "req_id": 1 00:13:16.128 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [65520-65519]" 00:13:16.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.128 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6336 -I 0 00:13:16.385 [2024-12-06 19:12:01.308220] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode6336: invalid cntlid range [1-0] 00:13:16.385 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:16.385 { 00:13:16.385 "nqn": "nqn.2016-06.io.spdk:cnode6336", 00:13:16.385 "max_cntlid": 0, 00:13:16.385 "method": "nvmf_create_subsystem", 00:13:16.385 "req_id": 1 00:13:16.385 } 00:13:16.385 Got JSON-RPC error response 00:13:16.385 response: 00:13:16.385 { 00:13:16.385 "code": -32602, 00:13:16.385 "message": "Invalid cntlid range [1-0]" 00:13:16.385 }' 00:13:16.386 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:16.386 { 00:13:16.386 "nqn": "nqn.2016-06.io.spdk:cnode6336", 00:13:16.386 "max_cntlid": 0, 00:13:16.386 "method": "nvmf_create_subsystem", 00:13:16.386 "req_id": 1 00:13:16.386 } 00:13:16.386 Got JSON-RPC error response 00:13:16.386 response: 00:13:16.386 { 00:13:16.386 "code": -32602, 00:13:16.386 "message": "Invalid cntlid range [1-0]" 00:13:16.386 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.386 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21248 -I 65520 00:13:16.643 [2024-12-06 19:12:01.585180] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21248: invalid cntlid range [1-65520] 00:13:16.643 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:16.643 { 00:13:16.643 "nqn": "nqn.2016-06.io.spdk:cnode21248", 00:13:16.643 "max_cntlid": 65520, 00:13:16.643 "method": "nvmf_create_subsystem", 00:13:16.643 "req_id": 1 00:13:16.643 } 00:13:16.643 Got JSON-RPC error response 00:13:16.643 response: 00:13:16.643 { 00:13:16.643 "code": -32602, 00:13:16.643 "message": "Invalid cntlid range [1-65520]" 00:13:16.643 }' 00:13:16.643 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:13:16.643 { 00:13:16.643 "nqn": "nqn.2016-06.io.spdk:cnode21248", 00:13:16.643 "max_cntlid": 65520, 00:13:16.643 "method": "nvmf_create_subsystem", 00:13:16.643 "req_id": 1 00:13:16.643 } 00:13:16.643 Got JSON-RPC error response 00:13:16.643 response: 00:13:16.643 { 00:13:16.643 "code": -32602, 00:13:16.643 "message": "Invalid cntlid range [1-65520]" 00:13:16.643 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.643 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27259 -i 6 -I 5 00:13:16.900 [2024-12-06 19:12:01.878156] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27259: invalid cntlid range [6-5] 00:13:16.900 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:16.900 { 00:13:16.900 "nqn": "nqn.2016-06.io.spdk:cnode27259", 00:13:16.900 "min_cntlid": 6, 00:13:16.900 "max_cntlid": 5, 00:13:16.900 "method": "nvmf_create_subsystem", 00:13:16.900 "req_id": 1 00:13:16.900 } 00:13:16.900 Got JSON-RPC error response 00:13:16.900 response: 00:13:16.900 { 00:13:16.900 "code": -32602, 00:13:16.900 "message": "Invalid cntlid range [6-5]" 00:13:16.900 }' 00:13:16.900 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:16.900 { 00:13:16.900 "nqn": "nqn.2016-06.io.spdk:cnode27259", 00:13:16.900 "min_cntlid": 6, 00:13:16.900 "max_cntlid": 5, 00:13:16.900 "method": "nvmf_create_subsystem", 00:13:16.900 "req_id": 1 00:13:16.900 } 00:13:16.900 Got JSON-RPC error response 00:13:16.900 response: 00:13:16.900 { 00:13:16.900 "code": -32602, 00:13:16.900 "message": "Invalid cntlid range [6-5]" 00:13:16.900 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.900 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:17.159 { 00:13:17.159 "name": "foobar", 00:13:17.159 "method": "nvmf_delete_target", 00:13:17.159 "req_id": 1 00:13:17.159 } 00:13:17.159 Got JSON-RPC error response 00:13:17.159 response: 00:13:17.159 { 00:13:17.159 "code": -32602, 00:13:17.159 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:17.159 }' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:17.159 { 00:13:17.159 "name": "foobar", 00:13:17.159 "method": "nvmf_delete_target", 00:13:17.159 "req_id": 1 00:13:17.159 } 00:13:17.159 Got JSON-RPC error response 00:13:17.159 response: 00:13:17.159 { 00:13:17.159 "code": -32602, 00:13:17.159 "message": "The specified target doesn't exist, cannot delete it." 00:13:17.159 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.159 rmmod nvme_tcp 00:13:17.159 
rmmod nvme_fabrics 00:13:17.159 rmmod nvme_keyring 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 174141 ']' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 174141 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 174141 ']' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 174141 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174141 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174141' 00:13:17.159 killing process with pid 174141 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 174141 00:13:17.159 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 174141 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.417 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.952 00:13:19.952 real 0m9.329s 00:13:19.952 user 0m22.485s 00:13:19.952 sys 0m2.592s 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 ************************************ 00:13:19.952 END TEST nvmf_invalid 00:13:19.952 ************************************ 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.952 19:12:04 
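The `Invalid cntlid range` errors earlier in this run ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) imply the bounds check sketched below. This is an inference from the error text alone, not SPDK's actual validation code; 65519 (0xFFEF) is taken from the upper bound quoted in the error messages.

```shell
# Inferred rule: a subsystem's controller-ID range is accepted only when
# 1 <= min_cntlid <= max_cntlid <= 65519 (bounds taken from the errors above).
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}

valid_cntlid_range 1 65519 && echo "accepted [1-65519]"
valid_cntlid_range 0 65519 || echo "rejected [0-65519]"
valid_cntlid_range 6 5     || echo "rejected [6-5]"
```

Each `-i`/`-I` test case in the trace above probes one violation of exactly one of these three conditions.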
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.952 19:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 ************************************ 00:13:19.952 START TEST nvmf_connect_stress 00:13:19.952 ************************************ 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.953 * Looking for test storage... 00:13:19.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.953 19:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.953 --rc genhtml_branch_coverage=1 00:13:19.953 --rc genhtml_function_coverage=1 00:13:19.953 --rc genhtml_legend=1 00:13:19.953 --rc geninfo_all_blocks=1 00:13:19.953 --rc geninfo_unexecuted_blocks=1 00:13:19.953 00:13:19.953 ' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.953 --rc genhtml_branch_coverage=1 00:13:19.953 --rc genhtml_function_coverage=1 00:13:19.953 --rc genhtml_legend=1 00:13:19.953 --rc geninfo_all_blocks=1 00:13:19.953 --rc geninfo_unexecuted_blocks=1 00:13:19.953 00:13:19.953 ' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.953 --rc genhtml_branch_coverage=1 00:13:19.953 --rc genhtml_function_coverage=1 00:13:19.953 --rc genhtml_legend=1 00:13:19.953 --rc geninfo_all_blocks=1 00:13:19.953 --rc geninfo_unexecuted_blocks=1 00:13:19.953 00:13:19.953 ' 00:13:19.953 19:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.953 --rc genhtml_branch_coverage=1 00:13:19.953 --rc genhtml_function_coverage=1 00:13:19.953 --rc genhtml_legend=1 00:13:19.953 --rc geninfo_all_blocks=1 00:13:19.953 --rc geninfo_unexecuted_blocks=1 00:13:19.953 00:13:19.953 ' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.953 19:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.953 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.954 19:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.954 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:21.865 
Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:21.865 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.865 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:21.866 Found net devices under 0000:84:00.0: cvl_0_0 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.866 19:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:21.866 Found net devices under 0000:84:00.1: cvl_0_1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.866 
19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:13:21.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:21.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:13:21.866
00:13:21.866 --- 10.0.0.2 ping statistics ---
00:13:21.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.866 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:21.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:21.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:13:21.866
00:13:21.866 --- 10.0.0.1 ping statistics ---
00:13:21.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.866 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=176912
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 176912
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 176912 ']'
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:21.866 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:21.866 [2024-12-06 19:12:06.895809] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:13:21.866 [2024-12-06 19:12:06.895894] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:22.125 [2024-12-06 19:12:06.994064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:22.125 [2024-12-06 19:12:07.069571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:22.125 [2024-12-06 19:12:07.069633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:22.125 [2024-12-06 19:12:07.069657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:22.125 [2024-12-06 19:12:07.069681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:22.125 [2024-12-06 19:12:07.069716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:22.125 [2024-12-06 19:12:07.071817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:22.125 [2024-12-06 19:12:07.071875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:22.125 [2024-12-06 19:12:07.071886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:22.384 [2024-12-06 19:12:07.296909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:22.384 [2024-12-06 19:12:07.314348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:22.384 NULL1
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=176944
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:13:22.384 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.385 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.952 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.952 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:22.952 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.952 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.952 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.210 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.210 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:23.210 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.210 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.210 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.469 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.469 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:23.469 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.469 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.469 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.727 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.727 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:23.727 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.727 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.727 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.985 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.985 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:23.985 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.985 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.985 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.552 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:24.552 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.552 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.552 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.811 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.811 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:24.811 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.811 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.811 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.070 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.070 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:25.070 19:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.070 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.070 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.328 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.328 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:25.328 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.328 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.328 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.586 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.586 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:25.586 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.587 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.587 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.154 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.154 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:26.154 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.154 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.154 19:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:26.413 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.413 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.671 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.671 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:26.671 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.671 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.671 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.929 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.929 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:26.929 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.929 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.929 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.187 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.187 19:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:27.187 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.187 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.187 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.753 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.753 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:27.753 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.753 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.753 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.011 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.011 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:28.011 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.011 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.011 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.268 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.268 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:28.268 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.269 19:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.269 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.527 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:28.527 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.527 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.527 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:28.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.784 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.350 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.350 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:29.350 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.350 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.350 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.608 19:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.608 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:29.608 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.608 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.608 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.867 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.867 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:29.867 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.867 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.867 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.125 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.125 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:30.125 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.125 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.125 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.383 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.383 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:30.383 
19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.383 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.383 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.949 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.949 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:30.949 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.949 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.949 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.206 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.206 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:31.206 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.206 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.207 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.464 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.464 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:31.464 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.464 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.464 
19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.722 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.722 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:31.722 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.722 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.722 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.979 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:31.979 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.979 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.979 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.544 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.545 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:32.545 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.545 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.545 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.545 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.803 19:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 176944 00:13:32.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (176944) - No such process 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 176944 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.803 rmmod nvme_tcp 00:13:32.803 rmmod nvme_fabrics 00:13:32.803 rmmod nvme_keyring 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 176912 ']' 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 176912 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 176912 ']' 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 176912 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176912 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176912' 00:13:32.803 killing process with pid 176912 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 176912 00:13:32.803 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 176912 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.063 19:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.602 00:13:35.602 real 0m15.595s 00:13:35.602 user 0m40.113s 00:13:35.602 sys 0m4.850s 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.602 ************************************ 00:13:35.602 END TEST nvmf_connect_stress 00:13:35.602 ************************************ 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.602 ************************************ 00:13:35.602 START TEST nvmf_fused_ordering 00:13:35.602 ************************************ 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:35.602 * Looking for test storage... 00:13:35.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.602 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.603 --rc genhtml_branch_coverage=1 00:13:35.603 --rc genhtml_function_coverage=1 00:13:35.603 --rc genhtml_legend=1 00:13:35.603 --rc geninfo_all_blocks=1 00:13:35.603 --rc geninfo_unexecuted_blocks=1 00:13:35.603 00:13:35.603 ' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.603 --rc genhtml_branch_coverage=1 00:13:35.603 --rc genhtml_function_coverage=1 00:13:35.603 --rc genhtml_legend=1 00:13:35.603 --rc geninfo_all_blocks=1 00:13:35.603 --rc geninfo_unexecuted_blocks=1 00:13:35.603 00:13:35.603 ' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.603 --rc genhtml_branch_coverage=1 00:13:35.603 --rc genhtml_function_coverage=1 00:13:35.603 --rc genhtml_legend=1 00:13:35.603 --rc geninfo_all_blocks=1 00:13:35.603 --rc geninfo_unexecuted_blocks=1 00:13:35.603 00:13:35.603 ' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.603 --rc genhtml_branch_coverage=1 
00:13:35.603 --rc genhtml_function_coverage=1 00:13:35.603 --rc genhtml_legend=1 00:13:35.603 --rc geninfo_all_blocks=1 00:13:35.603 --rc geninfo_unexecuted_blocks=1 00:13:35.603 00:13:35.603 ' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:35.603 19:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.603 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.514 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.515 19:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:37.515 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.515 19:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:37.515 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.515 19:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:37.515 Found net devices under 0000:84:00.0: cvl_0_0 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:37.515 Found net devices under 0000:84:00.1: cvl_0_1 
00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.515 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:37.775 00:13:37.775 --- 10.0.0.2 ping statistics --- 00:13:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.775 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:13:37.775 00:13:37.775 --- 10.0.0.1 ping statistics --- 00:13:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.775 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:37.775 19:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=180732 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 180732 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 180732 ']' 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.775 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:37.775 [2024-12-06 19:12:22.692058] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:13:37.775 [2024-12-06 19:12:22.692153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.775 [2024-12-06 19:12:22.765647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.034 [2024-12-06 19:12:22.824494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.034 [2024-12-06 19:12:22.824554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.034 [2024-12-06 19:12:22.824569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.034 [2024-12-06 19:12:22.824581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.034 [2024-12-06 19:12:22.824591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.034 [2024-12-06 19:12:22.825337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 [2024-12-06 19:12:22.967686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 [2024-12-06 19:12:22.983927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 NULL1 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.034 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:38.034 [2024-12-06 19:12:23.027244] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:13:38.034 [2024-12-06 19:12:23.027278] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180762 ] 00:13:38.600 Attached to nqn.2016-06.io.spdk:cnode1 00:13:38.600 Namespace ID: 1 size: 1GB 00:13:38.600 fused_ordering(0) ... fused_ordering(997) 00:13:40.270 [sequential fused_ordering(0)-fused_ordering(997) progress counters, logged between 00:13:38.600 and 00:13:40.270]
00:13:40.270 fused_ordering(998) 00:13:40.270 fused_ordering(999) 00:13:40.270 fused_ordering(1000) 00:13:40.270 fused_ordering(1001) 00:13:40.270 fused_ordering(1002) 00:13:40.270 fused_ordering(1003) 00:13:40.270 fused_ordering(1004) 00:13:40.270 fused_ordering(1005) 00:13:40.270 fused_ordering(1006) 00:13:40.270 fused_ordering(1007) 00:13:40.270 fused_ordering(1008) 00:13:40.270 fused_ordering(1009) 00:13:40.270 fused_ordering(1010) 00:13:40.270 fused_ordering(1011) 00:13:40.270 fused_ordering(1012) 00:13:40.270 fused_ordering(1013) 00:13:40.270 fused_ordering(1014) 00:13:40.270 fused_ordering(1015) 00:13:40.270 fused_ordering(1016) 00:13:40.270 fused_ordering(1017) 00:13:40.270 fused_ordering(1018) 00:13:40.270 fused_ordering(1019) 00:13:40.270 fused_ordering(1020) 00:13:40.270 fused_ordering(1021) 00:13:40.270 fused_ordering(1022) 00:13:40.270 fused_ordering(1023) 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.270 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.270 rmmod nvme_tcp 00:13:40.270 rmmod nvme_fabrics 00:13:40.270 rmmod nvme_keyring 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 180732 ']' 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 180732 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 180732 ']' 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 180732 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 180732 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 180732' 00:13:40.533 killing process with pid 180732 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 180732 00:13:40.533 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 180732 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.790 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.700 00:13:42.700 real 0m7.571s 00:13:42.700 user 0m5.003s 00:13:42.700 sys 0m3.199s 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:42.700 ************************************ 00:13:42.700 END TEST nvmf_fused_ordering 00:13:42.700 ************************************ 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:42.700 19:12:27 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.700 ************************************ 00:13:42.700 START TEST nvmf_ns_masking 00:13:42.700 ************************************ 00:13:42.700 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:42.959 * Looking for test storage... 00:13:42.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.959 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:42.959 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:42.959 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:42.959 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:42.959 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.960 19:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:42.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.960 --rc genhtml_branch_coverage=1 00:13:42.960 --rc genhtml_function_coverage=1 00:13:42.960 --rc genhtml_legend=1 00:13:42.960 --rc geninfo_all_blocks=1 00:13:42.960 --rc geninfo_unexecuted_blocks=1 00:13:42.960 00:13:42.960 ' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:42.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.960 --rc genhtml_branch_coverage=1 00:13:42.960 --rc genhtml_function_coverage=1 00:13:42.960 --rc genhtml_legend=1 00:13:42.960 --rc geninfo_all_blocks=1 00:13:42.960 --rc geninfo_unexecuted_blocks=1 00:13:42.960 00:13:42.960 ' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:42.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.960 --rc genhtml_branch_coverage=1 00:13:42.960 --rc genhtml_function_coverage=1 00:13:42.960 --rc genhtml_legend=1 00:13:42.960 --rc geninfo_all_blocks=1 00:13:42.960 --rc geninfo_unexecuted_blocks=1 00:13:42.960 00:13:42.960 ' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:42.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.960 --rc genhtml_branch_coverage=1 00:13:42.960 --rc 
genhtml_function_coverage=1 00:13:42.960 --rc genhtml_legend=1 00:13:42.960 --rc geninfo_all_blocks=1 00:13:42.960 --rc geninfo_unexecuted_blocks=1 00:13:42.960 00:13:42.960 ' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=84a8add6-22c7-4d2d-9bad-15d33cdaf1d2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8a4a9a9f-838c-447c-8314-f0a429e961b9 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:42.960 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5e230c75-0037-4726-bdf2-768687933f5e 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.961 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.498 19:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.498 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.499 19:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:45.499 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:45.499 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:13:45.499 Found net devices under 0000:84:00.0: cvl_0_0 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:45.499 Found net devices under 0000:84:00.1: cvl_0_1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.499 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:13:45.500 00:13:45.500 --- 10.0.0.2 ping statistics --- 00:13:45.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.500 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:45.500 00:13:45.500 --- 10.0.0.1 ping statistics --- 00:13:45.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.500 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=183010 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 183010 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 183010 ']' 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.500 [2024-12-06 19:12:30.281805] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:13:45.500 [2024-12-06 19:12:30.281892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.500 [2024-12-06 19:12:30.356813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.500 [2024-12-06 19:12:30.410882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.500 [2024-12-06 19:12:30.410948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:45.500 [2024-12-06 19:12:30.410972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.500 [2024-12-06 19:12:30.410984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.500 [2024-12-06 19:12:30.410994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.500 [2024-12-06 19:12:30.411645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.500 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.759 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.759 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.017 [2024-12-06 19:12:30.856480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.017 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:46.017 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:46.017 19:12:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:46.275 Malloc1 00:13:46.275 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:46.535 Malloc2 00:13:46.535 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:46.793 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:47.358 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.358 [2024-12-06 19:12:32.389200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5e230c75-0037-4726-bdf2-768687933f5e -a 10.0.0.2 -s 4420 -i 4 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.617 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:47.617 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.514 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.514 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.514 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.771 [ 0]:0x1 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.771 
19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e06c1cb6ae2a4b6da63f47a1cb306063 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e06c1cb6ae2a4b6da63f47a1cb306063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.771 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:50.029 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:50.029 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.029 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:50.029 [ 0]:0x1 00:13:50.029 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:50.029 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e06c1cb6ae2a4b6da63f47a1cb306063 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e06c1cb6ae2a4b6da63f47a1cb306063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:50.029 [ 1]:0x2 00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:50.029 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.285 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:50.285 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.285 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:50.285 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.285 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.542 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:50.799 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:50.799 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5e230c75-0037-4726-bdf2-768687933f5e -a 10.0.0.2 -s 4420 -i 4 00:13:51.056 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:51.056 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:51.056 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.056 19:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:51.056 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:51.056 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:52.980 19:12:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:53.238 [ 0]:0x2 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.238 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:53.496 [ 0]:0x1 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e06c1cb6ae2a4b6da63f47a1cb306063 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e06c1cb6ae2a4b6da63f47a1cb306063 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:53.496 [ 1]:0x2 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.496 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.754 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:54.012 [ 0]:0x2 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.012 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.271 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:54.271 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5e230c75-0037-4726-bdf2-768687933f5e -a 10.0.0.2 -s 4420 -i 4 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:54.529 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:56.428 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.685 [ 0]:0x1 00:13:56.685 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.685 19:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e06c1cb6ae2a4b6da63f47a1cb306063 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e06c1cb6ae2a4b6da63f47a1cb306063 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:56.686 [ 1]:0x2 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.686 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:56.944 
19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.944 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.203 [ 0]:0x2 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.203 19:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:57.203 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.462 [2024-12-06 19:12:42.359199] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:57.462 request: 00:13:57.462 { 00:13:57.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.462 "nsid": 2, 00:13:57.462 "host": "nqn.2016-06.io.spdk:host1", 00:13:57.462 "method": "nvmf_ns_remove_host", 00:13:57.462 "req_id": 1 00:13:57.462 } 00:13:57.462 Got JSON-RPC error response 00:13:57.462 response: 00:13:57.462 { 00:13:57.462 "code": -32602, 00:13:57.462 "message": "Invalid parameters" 00:13:57.462 } 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:57.462 19:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:57.462 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.463 [ 0]:0x2 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa0e118c56924f36b8bf46bfa5f88d0b 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa0e118c56924f36b8bf46bfa5f88d0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:57.463 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=184602 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 184602 /var/tmp/host.sock 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 184602 ']' 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:57.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.722 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.722 [2024-12-06 19:12:42.567077] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:13:57.722 [2024-12-06 19:12:42.567158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184602 ] 00:13:57.722 [2024-12-06 19:12:42.636801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.722 [2024-12-06 19:12:42.696825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.981 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.981 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:57.981 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.547 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.805 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 84a8add6-22c7-4d2d-9bad-15d33cdaf1d2 00:13:58.805 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:58.805 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84A8ADD622C74D2D9BAD15D33CDAF1D2 -i 00:13:59.063 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8a4a9a9f-838c-447c-8314-f0a429e961b9 00:13:59.063 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:59.063 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8A4A9A9F838C447C8314F0A429E961B9 -i 00:13:59.321 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.578 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:59.836 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:59.836 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:00.408 nvme0n1 00:14:00.408 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:00.408 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:00.666 nvme1n2 00:14:00.924 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:00.924 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:00.924 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:00.924 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:00.924 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:01.182 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:01.182 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:01.182 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:01.182 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:01.439 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 84a8add6-22c7-4d2d-9bad-15d33cdaf1d2 == \8\4\a\8\a\d\d\6\-\2\2\c\7\-\4\d\2\d\-\9\b\a\d\-\1\5\d\3\3\c\d\a\f\1\d\2 ]] 00:14:01.439 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:01.439 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:01.439 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:01.696 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8a4a9a9f-838c-447c-8314-f0a429e961b9 == \8\a\4\a\9\a\9\f\-\8\3\8\c\-\4\4\7\c\-\8\3\1\4\-\f\0\a\4\2\9\e\9\6\1\b\9 ]] 00:14:01.696 19:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.955 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 84a8add6-22c7-4d2d-9bad-15d33cdaf1d2 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A8ADD622C74D2D9BAD15D33CDAF1D2 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A8ADD622C74D2D9BAD15D33CDAF1D2 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:02.213 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84A8ADD622C74D2D9BAD15D33CDAF1D2 00:14:02.471 [2024-12-06 19:12:47.338066] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:02.471 [2024-12-06 19:12:47.338123] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:02.471 [2024-12-06 19:12:47.338138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.471 request: 00:14:02.471 { 00:14:02.471 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.471 "namespace": { 00:14:02.471 "bdev_name": "invalid", 00:14:02.471 "nsid": 1, 00:14:02.471 "nguid": "84A8ADD622C74D2D9BAD15D33CDAF1D2", 00:14:02.471 "no_auto_visible": false, 00:14:02.471 "hide_metadata": false 00:14:02.471 }, 00:14:02.471 "method": "nvmf_subsystem_add_ns", 00:14:02.471 "req_id": 1 00:14:02.471 } 00:14:02.471 Got JSON-RPC error response 00:14:02.471 response: 00:14:02.471 { 00:14:02.471 "code": -32602, 00:14:02.471 "message": "Invalid parameters" 00:14:02.471 } 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:02.471 19:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 84a8add6-22c7-4d2d-9bad-15d33cdaf1d2 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:02.471 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84A8ADD622C74D2D9BAD15D33CDAF1D2 -i 00:14:02.730 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:04.628 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:04.628 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:04.628 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:04.941 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 184602 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 184602 ']' 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 184602 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:04.942 19:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184602 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184602' 00:14:04.942 killing process with pid 184602 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 184602 00:14:04.942 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 184602 00:14:05.505 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:05.763 rmmod nvme_tcp 00:14:05.763 rmmod nvme_fabrics 00:14:05.763 rmmod nvme_keyring 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 183010 ']' 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 183010 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 183010 ']' 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 183010 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183010 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183010' 00:14:05.763 killing process with pid 183010 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 183010 00:14:05.763 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 183010 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.023 19:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.562 00:14:08.562 real 0m25.357s 00:14:08.562 user 0m37.014s 00:14:08.562 sys 0m4.789s 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:08.562 ************************************ 00:14:08.562 END TEST nvmf_ns_masking 00:14:08.562 ************************************ 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:08.562 
19:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.562 ************************************ 00:14:08.562 START TEST nvmf_nvme_cli 00:14:08.562 ************************************ 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:08.562 * Looking for test storage... 00:14:08.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.562 
19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.562 --rc genhtml_branch_coverage=1 00:14:08.562 --rc genhtml_function_coverage=1 00:14:08.562 --rc genhtml_legend=1 00:14:08.562 --rc geninfo_all_blocks=1 00:14:08.562 --rc geninfo_unexecuted_blocks=1 00:14:08.562 
00:14:08.562 ' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.562 --rc genhtml_branch_coverage=1 00:14:08.562 --rc genhtml_function_coverage=1 00:14:08.562 --rc genhtml_legend=1 00:14:08.562 --rc geninfo_all_blocks=1 00:14:08.562 --rc geninfo_unexecuted_blocks=1 00:14:08.562 00:14:08.562 ' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.562 --rc genhtml_branch_coverage=1 00:14:08.562 --rc genhtml_function_coverage=1 00:14:08.562 --rc genhtml_legend=1 00:14:08.562 --rc geninfo_all_blocks=1 00:14:08.562 --rc geninfo_unexecuted_blocks=1 00:14:08.562 00:14:08.562 ' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.562 --rc genhtml_branch_coverage=1 00:14:08.562 --rc genhtml_function_coverage=1 00:14:08.562 --rc genhtml_legend=1 00:14:08.562 --rc geninfo_all_blocks=1 00:14:08.562 --rc geninfo_unexecuted_blocks=1 00:14:08.562 00:14:08.562 ' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.562 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.563 19:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.563 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:10.470 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:10.470 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.470 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:10.471 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.471 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:10.471 Found net devices under 0000:84:00.0: cvl_0_0 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:10.471 Found net devices under 0000:84:00.1: cvl_0_1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.471 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.471 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:14:10.730 00:14:10.730 --- 10.0.0.2 ping statistics --- 00:14:10.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.730 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:14:10.730 00:14:10.730 --- 10.0.0.1 ping statistics --- 00:14:10.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.730 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.730 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=187649 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 187649 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 187649 ']' 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.730 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.730 [2024-12-06 19:12:55.634894] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:14:10.730 [2024-12-06 19:12:55.634975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.730 [2024-12-06 19:12:55.706272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.730 [2024-12-06 19:12:55.761316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.730 [2024-12-06 19:12:55.761376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.730 [2024-12-06 19:12:55.761399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.730 [2024-12-06 19:12:55.761409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.730 [2024-12-06 19:12:55.761419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:10.730 [2024-12-06 19:12:55.763168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.730 [2024-12-06 19:12:55.763231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.730 [2024-12-06 19:12:55.763294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.730 [2024-12-06 19:12:55.763297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.989 [2024-12-06 19:12:55.915698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 Malloc0
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 Malloc1
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 [2024-12-06 19:12:56.014516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:10.989 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420
00:14:11.247
00:14:11.247 Discovery Log Number of Records 2, Generation counter 2
00:14:11.247 =====Discovery Log Entry 0======
00:14:11.247 trtype: tcp
00:14:11.247 adrfam: ipv4
00:14:11.247 subtype: current discovery subsystem
00:14:11.247 treq: not required
00:14:11.247 portid: 0
00:14:11.247 trsvcid: 4420
00:14:11.247 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:11.247 traddr: 10.0.0.2
00:14:11.247 eflags: explicit discovery connections, duplicate discovery information
00:14:11.247 sectype: none
00:14:11.247 =====Discovery Log Entry 1======
00:14:11.247 trtype: tcp
00:14:11.247 adrfam: ipv4
00:14:11.247 subtype: nvme subsystem
00:14:11.247 treq: not required
00:14:11.247 portid: 0
00:14:11.247 trsvcid: 4420
00:14:11.247 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:11.247 traddr: 10.0.0.2
00:14:11.247 eflags: none
00:14:11.247 sectype: none
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:14:11.247 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:14:11.814 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:14:14.344 /dev/nvme0n2 ]]
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:14:14.344 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:14.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:14.603 rmmod nvme_tcp
00:14:14.603 rmmod nvme_fabrics
00:14:14.603 rmmod nvme_keyring
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 187649 ']'
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 187649
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 187649 ']'
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 187649
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187649
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187649'
killing process with pid 187649
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 187649
00:14:14.603 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 187649
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:14.864 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:17.404
00:14:17.404 real 0m8.750s
00:14:17.404 user 0m16.705s
00:14:17.404 sys 0m2.382s
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:17.404 ************************************
00:14:17.404 END TEST nvmf_nvme_cli
00:14:17.404 ************************************
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:17.404 ************************************
00:14:17.404 START TEST nvmf_vfio_user
00:14:17.404 ************************************
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:14:17.404 * Looking for test storage...
00:14:17.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:14:17.404 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:17.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.405 --rc genhtml_branch_coverage=1
00:14:17.405 --rc genhtml_function_coverage=1
00:14:17.405 --rc genhtml_legend=1
00:14:17.405 --rc geninfo_all_blocks=1
00:14:17.405 --rc geninfo_unexecuted_blocks=1
00:14:17.405
00:14:17.405 '
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:17.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.405 --rc genhtml_branch_coverage=1
00:14:17.405 --rc genhtml_function_coverage=1
00:14:17.405 --rc genhtml_legend=1
00:14:17.405 --rc geninfo_all_blocks=1
00:14:17.405 --rc geninfo_unexecuted_blocks=1
00:14:17.405
00:14:17.405 '
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:14:17.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.405 --rc genhtml_branch_coverage=1
00:14:17.405 --rc genhtml_function_coverage=1
00:14:17.405 --rc genhtml_legend=1
00:14:17.405 --rc geninfo_all_blocks=1
00:14:17.405 --rc geninfo_unexecuted_blocks=1
00:14:17.405
00:14:17.405 '
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:14:17.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.405 --rc genhtml_branch_coverage=1
00:14:17.405 --rc genhtml_function_coverage=1
00:14:17.405 --rc genhtml_legend=1
00:14:17.405 --rc geninfo_all_blocks=1
00:14:17.405 --rc geninfo_unexecuted_blocks=1
00:14:17.405
00:14:17.405 '
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:14:17.405 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=188585
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 188585'
Process pid: 188585
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 188585
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 188585 ']'
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:14:17.406 [2024-12-06 19:13:02.125011] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:14:17.406 [2024-12-06 19:13:02.125112] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:17.406 [2024-12-06 19:13:02.192458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:17.406 [2024-12-06 19:13:02.249987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:17.406 [2024-12-06 19:13:02.250057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:17.406 [2024-12-06 19:13:02.250071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:17.406 [2024-12-06 19:13:02.250082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:17.406 [2024-12-06 19:13:02.250091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:17.406 [2024-12-06 19:13:02.251651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.406 [2024-12-06 19:13:02.251748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.406 [2024-12-06 19:13:02.251781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.406 [2024-12-06 19:13:02.251784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:17.406 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:18.338 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:18.903 Malloc1 00:14:18.903 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:19.469 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:19.469 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:19.725 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.725 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:19.725 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.288 Malloc2 00:14:20.288 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:20.543 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:20.799 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:21.057 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:21.057 [2024-12-06 19:13:05.911810] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:14:21.057 [2024-12-06 19:13:05.911852] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189000 ] 00:14:21.057 [2024-12-06 19:13:05.962766] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:21.057 [2024-12-06 19:13:05.968319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.057 [2024-12-06 19:13:05.968351] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc964634000 00:14:21.057 [2024-12-06 19:13:05.969316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.970313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.971319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.972320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.973330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.974331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.975336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.976340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.057 [2024-12-06 19:13:05.977348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.057 [2024-12-06 19:13:05.977368] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc964629000 00:14:21.057 [2024-12-06 19:13:05.978484] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.057 [2024-12-06 19:13:05.994151] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:21.057 [2024-12-06 19:13:05.994194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:21.057 [2024-12-06 19:13:05.996468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:21.057 [2024-12-06 19:13:05.996522] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:21.057 [2024-12-06 19:13:05.996619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:21.057 [2024-12-06 19:13:05.996650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:21.057 [2024-12-06 19:13:05.996661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:21.057 [2024-12-06 19:13:05.997463] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:21.057 [2024-12-06 19:13:05.997485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:21.057 [2024-12-06 19:13:05.997498] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:21.057 [2024-12-06 19:13:05.998466] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:21.057 [2024-12-06 19:13:05.998486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:21.057 [2024-12-06 19:13:05.998500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:05.999470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:21.057 [2024-12-06 19:13:05.999488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:06.000475] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:21.057 [2024-12-06 19:13:06.000493] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:21.057 [2024-12-06 19:13:06.000502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:06.000513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:06.000627] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:21.057 [2024-12-06 19:13:06.000636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:06.000646] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:21.057 [2024-12-06 19:13:06.001487] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:21.057 [2024-12-06 19:13:06.002490] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:21.057 [2024-12-06 19:13:06.003493] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.057 [2024-12-06 19:13:06.004484] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.057 [2024-12-06 19:13:06.004626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.057 [2024-12-06 19:13:06.005503] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:21.057 [2024-12-06 19:13:06.005521] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.057 [2024-12-06 19:13:06.005530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:21.057 [2024-12-06 19:13:06.005554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:21.057 [2024-12-06 19:13:06.005572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.057 [2024-12-06 19:13:06.005608] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.057 [2024-12-06 19:13:06.005618] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.057 [2024-12-06 19:13:06.005624] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.005644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.005732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.005755] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:21.058 [2024-12-06 19:13:06.005768] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:21.058 [2024-12-06 19:13:06.005776] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:21.058 [2024-12-06 19:13:06.005784] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:21.058 [2024-12-06 19:13:06.005792] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:21.058 [2024-12-06 19:13:06.005799] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:21.058 [2024-12-06 19:13:06.005807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.005820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.005841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.005858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.005875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.058 [2024-12-06 19:13:06.005888] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.058 [2024-12-06 19:13:06.005900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.058 [2024-12-06 19:13:06.005911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.058 [2024-12-06 19:13:06.005920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.005936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.005951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.005965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.005977] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:21.058 [2024-12-06 19:13:06.005986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.005997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:21.058 [2024-12-06 19:13:06.006154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:21.058 [2024-12-06 19:13:06.006160] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006204] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:21.058 [2024-12-06 19:13:06.006226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006258] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.058 [2024-12-06 19:13:06.006266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.058 [2024-12-06 19:13:06.006271] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006362] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.058 [2024-12-06 19:13:06.006370] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.058 [2024-12-06 19:13:06.006375] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:14:21.058 [2024-12-06 19:13:06.006413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006478] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:21.058 [2024-12-06 19:13:06.006485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:21.058 [2024-12-06 19:13:06.006494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:21.058 [2024-12-06 19:13:06.006525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006656] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:21.058 [2024-12-06 19:13:06.006666] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:21.058 [2024-12-06 19:13:06.006672] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:21.058 [2024-12-06 19:13:06.006678] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:21.058 [2024-12-06 19:13:06.006683] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:21.058 [2024-12-06 19:13:06.006692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:21.058 [2024-12-06 19:13:06.006719] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:21.058 [2024-12-06 19:13:06.006740] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:21.058 [2024-12-06 19:13:06.006746] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006767] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:21.058 [2024-12-06 19:13:06.006774] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.058 [2024-12-06 19:13:06.006780] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006801] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:21.058 [2024-12-06 19:13:06.006809] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:21.058 [2024-12-06 19:13:06.006815] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.058 [2024-12-06 19:13:06.006823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:21.058 [2024-12-06 19:13:06.006835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 
19:13:06.006857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:21.058 [2024-12-06 19:13:06.006887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:21.058 ===================================================== 00:14:21.058 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.058 ===================================================== 00:14:21.058 Controller Capabilities/Features 00:14:21.058 ================================ 00:14:21.058 Vendor ID: 4e58 00:14:21.058 Subsystem Vendor ID: 4e58 00:14:21.058 Serial Number: SPDK1 00:14:21.058 Model Number: SPDK bdev Controller 00:14:21.058 Firmware Version: 25.01 00:14:21.058 Recommended Arb Burst: 6 00:14:21.058 IEEE OUI Identifier: 8d 6b 50 00:14:21.058 Multi-path I/O 00:14:21.058 May have multiple subsystem ports: Yes 00:14:21.058 May have multiple controllers: Yes 00:14:21.058 Associated with SR-IOV VF: No 00:14:21.058 Max Data Transfer Size: 131072 00:14:21.058 Max Number of Namespaces: 32 00:14:21.058 Max Number of I/O Queues: 127 00:14:21.058 NVMe Specification Version (VS): 1.3 00:14:21.058 NVMe Specification Version (Identify): 1.3 00:14:21.058 Maximum Queue Entries: 256 00:14:21.058 Contiguous Queues Required: Yes 00:14:21.058 Arbitration Mechanisms Supported 00:14:21.058 Weighted Round Robin: Not Supported 00:14:21.058 Vendor Specific: Not Supported 00:14:21.058 Reset Timeout: 15000 ms 00:14:21.058 Doorbell Stride: 4 bytes 00:14:21.058 NVM Subsystem Reset: Not Supported 00:14:21.058 Command Sets Supported 00:14:21.058 NVM Command Set: Supported 00:14:21.058 Boot Partition: Not Supported 00:14:21.058 Memory Page Size Minimum: 4096 bytes 00:14:21.058 
Memory Page Size Maximum: 4096 bytes 00:14:21.058 Persistent Memory Region: Not Supported 00:14:21.058 Optional Asynchronous Events Supported 00:14:21.058 Namespace Attribute Notices: Supported 00:14:21.058 Firmware Activation Notices: Not Supported 00:14:21.058 ANA Change Notices: Not Supported 00:14:21.058 PLE Aggregate Log Change Notices: Not Supported 00:14:21.058 LBA Status Info Alert Notices: Not Supported 00:14:21.058 EGE Aggregate Log Change Notices: Not Supported 00:14:21.058 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.059 Zone Descriptor Change Notices: Not Supported 00:14:21.059 Discovery Log Change Notices: Not Supported 00:14:21.059 Controller Attributes 00:14:21.059 128-bit Host Identifier: Supported 00:14:21.059 Non-Operational Permissive Mode: Not Supported 00:14:21.059 NVM Sets: Not Supported 00:14:21.059 Read Recovery Levels: Not Supported 00:14:21.059 Endurance Groups: Not Supported 00:14:21.059 Predictable Latency Mode: Not Supported 00:14:21.059 Traffic Based Keep ALive: Not Supported 00:14:21.059 Namespace Granularity: Not Supported 00:14:21.059 SQ Associations: Not Supported 00:14:21.059 UUID List: Not Supported 00:14:21.059 Multi-Domain Subsystem: Not Supported 00:14:21.059 Fixed Capacity Management: Not Supported 00:14:21.059 Variable Capacity Management: Not Supported 00:14:21.059 Delete Endurance Group: Not Supported 00:14:21.059 Delete NVM Set: Not Supported 00:14:21.059 Extended LBA Formats Supported: Not Supported 00:14:21.059 Flexible Data Placement Supported: Not Supported 00:14:21.059 00:14:21.059 Controller Memory Buffer Support 00:14:21.059 ================================ 00:14:21.059 Supported: No 00:14:21.059 00:14:21.059 Persistent Memory Region Support 00:14:21.059 ================================ 00:14:21.059 Supported: No 00:14:21.059 00:14:21.059 Admin Command Set Attributes 00:14:21.059 ============================ 00:14:21.059 Security Send/Receive: Not Supported 00:14:21.059 Format NVM: Not Supported 
00:14:21.059 Firmware Activate/Download: Not Supported 00:14:21.059 Namespace Management: Not Supported 00:14:21.059 Device Self-Test: Not Supported 00:14:21.059 Directives: Not Supported 00:14:21.059 NVMe-MI: Not Supported 00:14:21.059 Virtualization Management: Not Supported 00:14:21.059 Doorbell Buffer Config: Not Supported 00:14:21.059 Get LBA Status Capability: Not Supported 00:14:21.059 Command & Feature Lockdown Capability: Not Supported 00:14:21.059 Abort Command Limit: 4 00:14:21.059 Async Event Request Limit: 4 00:14:21.059 Number of Firmware Slots: N/A 00:14:21.059 Firmware Slot 1 Read-Only: N/A 00:14:21.059 Firmware Activation Without Reset: N/A 00:14:21.059 Multiple Update Detection Support: N/A 00:14:21.059 Firmware Update Granularity: No Information Provided 00:14:21.059 Per-Namespace SMART Log: No 00:14:21.059 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.059 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:21.059 Command Effects Log Page: Supported 00:14:21.059 Get Log Page Extended Data: Supported 00:14:21.059 Telemetry Log Pages: Not Supported 00:14:21.059 Persistent Event Log Pages: Not Supported 00:14:21.059 Supported Log Pages Log Page: May Support 00:14:21.059 Commands Supported & Effects Log Page: Not Supported 00:14:21.059 Feature Identifiers & Effects Log Page:May Support 00:14:21.059 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.059 Data Area 4 for Telemetry Log: Not Supported 00:14:21.059 Error Log Page Entries Supported: 128 00:14:21.059 Keep Alive: Supported 00:14:21.059 Keep Alive Granularity: 10000 ms 00:14:21.059 00:14:21.059 NVM Command Set Attributes 00:14:21.059 ========================== 00:14:21.059 Submission Queue Entry Size 00:14:21.059 Max: 64 00:14:21.059 Min: 64 00:14:21.059 Completion Queue Entry Size 00:14:21.059 Max: 16 00:14:21.059 Min: 16 00:14:21.059 Number of Namespaces: 32 00:14:21.059 Compare Command: Supported 00:14:21.059 Write Uncorrectable Command: Not Supported 00:14:21.059 Dataset 
Management Command: Supported 00:14:21.059 Write Zeroes Command: Supported 00:14:21.059 Set Features Save Field: Not Supported 00:14:21.059 Reservations: Not Supported 00:14:21.059 Timestamp: Not Supported 00:14:21.059 Copy: Supported 00:14:21.059 Volatile Write Cache: Present 00:14:21.059 Atomic Write Unit (Normal): 1 00:14:21.059 Atomic Write Unit (PFail): 1 00:14:21.059 Atomic Compare & Write Unit: 1 00:14:21.059 Fused Compare & Write: Supported 00:14:21.059 Scatter-Gather List 00:14:21.059 SGL Command Set: Supported (Dword aligned) 00:14:21.059 SGL Keyed: Not Supported 00:14:21.059 SGL Bit Bucket Descriptor: Not Supported 00:14:21.059 SGL Metadata Pointer: Not Supported 00:14:21.059 Oversized SGL: Not Supported 00:14:21.059 SGL Metadata Address: Not Supported 00:14:21.059 SGL Offset: Not Supported 00:14:21.059 Transport SGL Data Block: Not Supported 00:14:21.059 Replay Protected Memory Block: Not Supported 00:14:21.059 00:14:21.059 Firmware Slot Information 00:14:21.059 ========================= 00:14:21.059 Active slot: 1 00:14:21.059 Slot 1 Firmware Revision: 25.01 00:14:21.059 00:14:21.059 00:14:21.059 Commands Supported and Effects 00:14:21.059 ============================== 00:14:21.059 Admin Commands 00:14:21.059 -------------- 00:14:21.059 Get Log Page (02h): Supported 00:14:21.059 Identify (06h): Supported 00:14:21.059 Abort (08h): Supported 00:14:21.059 Set Features (09h): Supported 00:14:21.059 Get Features (0Ah): Supported 00:14:21.059 Asynchronous Event Request (0Ch): Supported 00:14:21.059 Keep Alive (18h): Supported 00:14:21.059 I/O Commands 00:14:21.059 ------------ 00:14:21.059 Flush (00h): Supported LBA-Change 00:14:21.059 Write (01h): Supported LBA-Change 00:14:21.059 Read (02h): Supported 00:14:21.059 Compare (05h): Supported 00:14:21.059 Write Zeroes (08h): Supported LBA-Change 00:14:21.059 Dataset Management (09h): Supported LBA-Change 00:14:21.059 Copy (19h): Supported LBA-Change 00:14:21.059 00:14:21.059 Error Log 00:14:21.059 ========= 
00:14:21.059 00:14:21.059 Arbitration 00:14:21.059 =========== 00:14:21.059 Arbitration Burst: 1 00:14:21.059 00:14:21.059 Power Management 00:14:21.059 ================ 00:14:21.059 Number of Power States: 1 00:14:21.059 Current Power State: Power State #0 00:14:21.059 Power State #0: 00:14:21.059 Max Power: 0.00 W 00:14:21.059 Non-Operational State: Operational 00:14:21.059 Entry Latency: Not Reported 00:14:21.059 Exit Latency: Not Reported 00:14:21.059 Relative Read Throughput: 0 00:14:21.059 Relative Read Latency: 0 00:14:21.059 Relative Write Throughput: 0 00:14:21.059 Relative Write Latency: 0 00:14:21.059 Idle Power: Not Reported 00:14:21.059 Active Power: Not Reported 00:14:21.059 Non-Operational Permissive Mode: Not Supported 00:14:21.059 00:14:21.059 Health Information 00:14:21.059 ================== 00:14:21.059 Critical Warnings: 00:14:21.059 Available Spare Space: OK 00:14:21.059 Temperature: OK 00:14:21.059 Device Reliability: OK 00:14:21.059 Read Only: No 00:14:21.059 Volatile Memory Backup: OK 00:14:21.059 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:21.059 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:21.059 Available Spare: 0% 00:14:21.059 Available Spare Threshold: 0% [2024-12-06 19:13:06.007031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:21.059 [2024-12-06 19:13:06.007051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:21.059 [2024-12-06 19:13:06.007100] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:21.059 [2024-12-06 19:13:06.007119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.059 [2024-12-06 19:13:06.007130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.059 [2024-12-06 19:13:06.007139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.059 [2024-12-06 19:13:06.007148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.059 [2024-12-06 19:13:06.009732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.059 [2024-12-06 19:13:06.009756] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:21.059 [2024-12-06 19:13:06.010517] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.059 [2024-12-06 19:13:06.010590] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:21.059 [2024-12-06 19:13:06.010603] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:21.059 [2024-12-06 19:13:06.011528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:21.059 [2024-12-06 19:13:06.011552] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:21.059 [2024-12-06 19:13:06.011606] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:21.059 [2024-12-06 19:13:06.014734] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.059 Life Percentage Used: 0% 00:14:21.059 Data Units Read: 0 00:14:21.059 Data 
Units Written: 0 00:14:21.059 Host Read Commands: 0 00:14:21.059 Host Write Commands: 0 00:14:21.059 Controller Busy Time: 0 minutes 00:14:21.059 Power Cycles: 0 00:14:21.059 Power On Hours: 0 hours 00:14:21.059 Unsafe Shutdowns: 0 00:14:21.059 Unrecoverable Media Errors: 0 00:14:21.059 Lifetime Error Log Entries: 0 00:14:21.059 Warning Temperature Time: 0 minutes 00:14:21.059 Critical Temperature Time: 0 minutes 00:14:21.059 00:14:21.059 Number of Queues 00:14:21.059 ================ 00:14:21.059 Number of I/O Submission Queues: 127 00:14:21.059 Number of I/O Completion Queues: 127 00:14:21.059 00:14:21.059 Active Namespaces 00:14:21.059 ================= 00:14:21.059 Namespace ID:1 00:14:21.059 Error Recovery Timeout: Unlimited 00:14:21.059 Command Set Identifier: NVM (00h) 00:14:21.059 Deallocate: Supported 00:14:21.059 Deallocated/Unwritten Error: Not Supported 00:14:21.059 Deallocated Read Value: Unknown 00:14:21.059 Deallocate in Write Zeroes: Not Supported 00:14:21.059 Deallocated Guard Field: 0xFFFF 00:14:21.059 Flush: Supported 00:14:21.059 Reservation: Supported 00:14:21.059 Namespace Sharing Capabilities: Multiple Controllers 00:14:21.059 Size (in LBAs): 131072 (0GiB) 00:14:21.059 Capacity (in LBAs): 131072 (0GiB) 00:14:21.059 Utilization (in LBAs): 131072 (0GiB) 00:14:21.059 NGUID: 73B553ED144F4BFA9D253C5789CAF276 00:14:21.059 UUID: 73b553ed-144f-4bfa-9d25-3c5789caf276 00:14:21.059 Thin Provisioning: Not Supported 00:14:21.059 Per-NS Atomic Units: Yes 00:14:21.059 Atomic Boundary Size (Normal): 0 00:14:21.059 Atomic Boundary Size (PFail): 0 00:14:21.059 Atomic Boundary Offset: 0 00:14:21.059 Maximum Single Source Range Length: 65535 00:14:21.059 Maximum Copy Length: 65535 00:14:21.059 Maximum Source Range Count: 1 00:14:21.060 NGUID/EUI64 Never Reused: No 00:14:21.060 Namespace Write Protected: No 00:14:21.060 Number of LBA Formats: 1 00:14:21.060 Current LBA Format: LBA Format #00 00:14:21.060 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:14:21.060 00:14:21.060 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:21.317 [2024-12-06 19:13:06.263645] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:26.581 Initializing NVMe Controllers 00:14:26.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:26.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:26.582 Initialization complete. Launching workers. 00:14:26.582 ======================================================== 00:14:26.582 Latency(us) 00:14:26.582 Device Information : IOPS MiB/s Average min max 00:14:26.582 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30884.84 120.64 4143.72 1242.21 8727.04 00:14:26.582 ======================================================== 00:14:26.582 Total : 30884.84 120.64 4143.72 1242.21 8727.04 00:14:26.582 00:14:26.582 [2024-12-06 19:13:11.285991] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:26.582 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:26.582 [2024-12-06 19:13:11.550271] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.847 Initializing NVMe Controllers 00:14:31.847 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:14:31.847 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:31.847 Initialization complete. Launching workers. 00:14:31.847 ======================================================== 00:14:31.847 Latency(us) 00:14:31.847 Device Information : IOPS MiB/s Average min max 00:14:31.847 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.06 62.71 7978.28 7049.44 8091.71 00:14:31.847 ======================================================== 00:14:31.847 Total : 16054.06 62.71 7978.28 7049.44 8091.71 00:14:31.847 00:14:31.847 [2024-12-06 19:13:16.591425] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.847 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:31.847 [2024-12-06 19:13:16.829532] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.109 [2024-12-06 19:13:21.917161] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.109 Initializing NVMe Controllers 00:14:37.109 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.109 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.109 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:37.109 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:37.109 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:37.109 Initialization complete. Launching workers. 
00:14:37.109 Starting thread on core 2 00:14:37.109 Starting thread on core 3 00:14:37.109 Starting thread on core 1 00:14:37.109 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:37.368 [2024-12-06 19:13:22.246183] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.656 [2024-12-06 19:13:25.307573] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.656 Initializing NVMe Controllers 00:14:40.656 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.656 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.656 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:40.656 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:40.656 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:40.656 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:40.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:40.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:40.656 Initialization complete. Launching workers. 
00:14:40.656 Starting thread on core 1 with urgent priority queue 00:14:40.656 Starting thread on core 2 with urgent priority queue 00:14:40.656 Starting thread on core 3 with urgent priority queue 00:14:40.656 Starting thread on core 0 with urgent priority queue 00:14:40.656 SPDK bdev Controller (SPDK1 ) core 0: 6101.67 IO/s 16.39 secs/100000 ios 00:14:40.656 SPDK bdev Controller (SPDK1 ) core 1: 5194.33 IO/s 19.25 secs/100000 ios 00:14:40.656 SPDK bdev Controller (SPDK1 ) core 2: 5707.33 IO/s 17.52 secs/100000 ios 00:14:40.656 SPDK bdev Controller (SPDK1 ) core 3: 4943.33 IO/s 20.23 secs/100000 ios 00:14:40.656 ======================================================== 00:14:40.656 00:14:40.656 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:40.656 [2024-12-06 19:13:25.633297] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.656 Initializing NVMe Controllers 00:14:40.656 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.656 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:40.656 Namespace ID: 1 size: 0GB 00:14:40.656 Initialization complete. 00:14:40.656 INFO: using host memory buffer for IO 00:14:40.656 Hello world! 
00:14:40.656 [2024-12-06 19:13:25.666923] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.914 19:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:41.171 [2024-12-06 19:13:25.983224] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.105 Initializing NVMe Controllers 00:14:42.105 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.105 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.105 Initialization complete. Launching workers. 00:14:42.105 submit (in ns) avg, min, max = 6513.1, 3510.0, 4015450.0 00:14:42.105 complete (in ns) avg, min, max = 26849.7, 2088.9, 8003672.2 00:14:42.105 00:14:42.105 Submit histogram 00:14:42.105 ================ 00:14:42.105 Range in us Cumulative Count 00:14:42.105 3.508 - 3.532: 0.1039% ( 13) 00:14:42.105 3.532 - 3.556: 0.6875% ( 73) 00:14:42.105 3.556 - 3.579: 2.9736% ( 286) 00:14:42.105 3.579 - 3.603: 6.5468% ( 447) 00:14:42.105 3.603 - 3.627: 14.0208% ( 935) 00:14:42.105 3.627 - 3.650: 23.1335% ( 1140) 00:14:42.105 3.650 - 3.674: 34.3645% ( 1405) 00:14:42.105 3.674 - 3.698: 42.8617% ( 1063) 00:14:42.105 3.698 - 3.721: 51.1990% ( 1043) 00:14:42.105 3.721 - 3.745: 56.4748% ( 660) 00:14:42.105 3.745 - 3.769: 61.5268% ( 632) 00:14:42.105 3.769 - 3.793: 65.9073% ( 548) 00:14:42.105 3.793 - 3.816: 69.7122% ( 476) 00:14:42.105 3.816 - 3.840: 73.5412% ( 479) 00:14:42.105 3.840 - 3.864: 76.9145% ( 422) 00:14:42.105 3.864 - 3.887: 80.3357% ( 428) 00:14:42.105 3.887 - 3.911: 83.7650% ( 429) 00:14:42.105 3.911 - 3.935: 86.6347% ( 359) 00:14:42.105 3.935 - 3.959: 88.5292% ( 237) 00:14:42.105 3.959 - 3.982: 90.1039% ( 197) 00:14:42.105 3.982 - 4.006: 91.8305% ( 
216) 00:14:42.105 4.006 - 4.030: 93.1015% ( 159) 00:14:42.105 4.030 - 4.053: 94.3565% ( 157) 00:14:42.105 4.053 - 4.077: 95.1319% ( 97) 00:14:42.105 4.077 - 4.101: 95.7954% ( 83) 00:14:42.105 4.101 - 4.124: 96.1950% ( 50) 00:14:42.105 4.124 - 4.148: 96.4668% ( 34) 00:14:42.105 4.148 - 4.172: 96.5787% ( 14) 00:14:42.105 4.172 - 4.196: 96.7146% ( 17) 00:14:42.105 4.196 - 4.219: 96.8265% ( 14) 00:14:42.105 4.219 - 4.243: 96.9145% ( 11) 00:14:42.105 4.243 - 4.267: 96.9704% ( 7) 00:14:42.105 4.267 - 4.290: 97.0743% ( 13) 00:14:42.105 4.290 - 4.314: 97.1783% ( 13) 00:14:42.105 4.314 - 4.338: 97.2502% ( 9) 00:14:42.105 4.338 - 4.361: 97.2742% ( 3) 00:14:42.105 4.361 - 4.385: 97.2982% ( 3) 00:14:42.105 4.385 - 4.409: 97.3381% ( 5) 00:14:42.105 4.409 - 4.433: 97.3781% ( 5) 00:14:42.105 4.433 - 4.456: 97.3861% ( 1) 00:14:42.105 4.456 - 4.480: 97.3941% ( 1) 00:14:42.105 4.480 - 4.504: 97.4181% ( 3) 00:14:42.105 4.504 - 4.527: 97.4580% ( 5) 00:14:42.105 4.527 - 4.551: 97.4740% ( 2) 00:14:42.105 4.551 - 4.575: 97.5060% ( 4) 00:14:42.105 4.575 - 4.599: 97.5220% ( 2) 00:14:42.105 4.599 - 4.622: 97.5380% ( 2) 00:14:42.105 4.622 - 4.646: 97.5620% ( 3) 00:14:42.105 4.646 - 4.670: 97.5859% ( 3) 00:14:42.105 4.670 - 4.693: 97.5939% ( 1) 00:14:42.105 4.693 - 4.717: 97.6419% ( 6) 00:14:42.105 4.717 - 4.741: 97.6739% ( 4) 00:14:42.105 4.741 - 4.764: 97.7218% ( 6) 00:14:42.105 4.764 - 4.788: 97.7458% ( 3) 00:14:42.105 4.788 - 4.812: 97.7938% ( 6) 00:14:42.105 4.812 - 4.836: 97.8257% ( 4) 00:14:42.105 4.836 - 4.859: 97.8577% ( 4) 00:14:42.105 4.859 - 4.883: 97.8897% ( 4) 00:14:42.105 4.883 - 4.907: 97.9057% ( 2) 00:14:42.105 4.907 - 4.930: 97.9376% ( 4) 00:14:42.105 4.930 - 4.954: 97.9616% ( 3) 00:14:42.105 4.954 - 4.978: 97.9856% ( 3) 00:14:42.105 4.978 - 5.001: 98.0016% ( 2) 00:14:42.105 5.001 - 5.025: 98.0496% ( 6) 00:14:42.105 5.025 - 5.049: 98.0576% ( 1) 00:14:42.105 5.049 - 5.073: 98.0655% ( 1) 00:14:42.105 5.073 - 5.096: 98.0895% ( 3) 00:14:42.105 5.120 - 5.144: 98.1055% ( 2) 
00:14:42.105 5.144 - 5.167: 98.1295% ( 3) 00:14:42.105 5.167 - 5.191: 98.1375% ( 1) 00:14:42.105 5.191 - 5.215: 98.1455% ( 1) 00:14:42.105 5.215 - 5.239: 98.1535% ( 1) 00:14:42.105 5.239 - 5.262: 98.1615% ( 1) 00:14:42.105 5.262 - 5.286: 98.1695% ( 1) 00:14:42.105 5.286 - 5.310: 98.1775% ( 1) 00:14:42.105 5.310 - 5.333: 98.1855% ( 1) 00:14:42.105 5.333 - 5.357: 98.1934% ( 1) 00:14:42.105 5.404 - 5.428: 98.2014% ( 1) 00:14:42.105 5.428 - 5.452: 98.2094% ( 1) 00:14:42.105 5.523 - 5.547: 98.2254% ( 2) 00:14:42.105 5.713 - 5.736: 98.2494% ( 3) 00:14:42.105 6.400 - 6.447: 98.2574% ( 1) 00:14:42.105 6.495 - 6.542: 98.2654% ( 1) 00:14:42.105 6.827 - 6.874: 98.2734% ( 1) 00:14:42.105 7.064 - 7.111: 98.2814% ( 1) 00:14:42.105 7.159 - 7.206: 98.2894% ( 1) 00:14:42.105 7.206 - 7.253: 98.3133% ( 3) 00:14:42.105 7.348 - 7.396: 98.3213% ( 1) 00:14:42.105 7.443 - 7.490: 98.3373% ( 2) 00:14:42.105 7.490 - 7.538: 98.3533% ( 2) 00:14:42.105 7.538 - 7.585: 98.3613% ( 1) 00:14:42.105 7.585 - 7.633: 98.3693% ( 1) 00:14:42.105 7.680 - 7.727: 98.4013% ( 4) 00:14:42.105 7.727 - 7.775: 98.4093% ( 1) 00:14:42.105 7.775 - 7.822: 98.4173% ( 1) 00:14:42.105 7.870 - 7.917: 98.4333% ( 2) 00:14:42.105 7.917 - 7.964: 98.4572% ( 3) 00:14:42.105 8.012 - 8.059: 98.4652% ( 1) 00:14:42.105 8.059 - 8.107: 98.4812% ( 2) 00:14:42.105 8.107 - 8.154: 98.5052% ( 3) 00:14:42.105 8.154 - 8.201: 98.5132% ( 1) 00:14:42.105 8.201 - 8.249: 98.5292% ( 2) 00:14:42.105 8.249 - 8.296: 98.5372% ( 1) 00:14:42.105 8.296 - 8.344: 98.5452% ( 1) 00:14:42.105 8.344 - 8.391: 98.5691% ( 3) 00:14:42.105 8.486 - 8.533: 98.5771% ( 1) 00:14:42.105 8.533 - 8.581: 98.5851% ( 1) 00:14:42.105 8.723 - 8.770: 98.6091% ( 3) 00:14:42.105 8.865 - 8.913: 98.6171% ( 1) 00:14:42.105 8.960 - 9.007: 98.6251% ( 1) 00:14:42.105 9.007 - 9.055: 98.6331% ( 1) 00:14:42.105 9.244 - 9.292: 98.6491% ( 2) 00:14:42.105 9.813 - 9.861: 98.6571% ( 1) 00:14:42.105 10.145 - 10.193: 98.6651% ( 1) 00:14:42.105 10.335 - 10.382: 98.6731% ( 1) 00:14:42.105 10.714 - 
10.761: 98.6811% ( 1) 00:14:42.105 10.809 - 10.856: 98.6970% ( 2) 00:14:42.105 11.378 - 11.425: 98.7050% ( 1) 00:14:42.105 11.520 - 11.567: 98.7130% ( 1) 00:14:42.105 11.662 - 11.710: 98.7210% ( 1) 00:14:42.105 11.710 - 11.757: 98.7290% ( 1) 00:14:42.105 11.994 - 12.041: 98.7370% ( 1) 00:14:42.105 12.041 - 12.089: 98.7450% ( 1) 00:14:42.105 12.089 - 12.136: 98.7530% ( 1) 00:14:42.105 12.136 - 12.231: 98.7690% ( 2) 00:14:42.105 12.231 - 12.326: 98.7850% ( 2) 00:14:42.105 12.705 - 12.800: 98.7930% ( 1) 00:14:42.105 12.895 - 12.990: 98.8010% ( 1) 00:14:42.105 13.084 - 13.179: 98.8090% ( 1) 00:14:42.105 13.179 - 13.274: 98.8249% ( 2) 00:14:42.105 13.274 - 13.369: 98.8329% ( 1) 00:14:42.105 13.748 - 13.843: 98.8409% ( 1) 00:14:42.105 13.843 - 13.938: 98.8569% ( 2) 00:14:42.105 14.127 - 14.222: 98.8649% ( 1) 00:14:42.105 14.601 - 14.696: 98.8729% ( 1) 00:14:42.105 15.265 - 15.360: 98.8809% ( 1) 00:14:42.105 15.739 - 15.834: 98.8889% ( 1) 00:14:42.105 16.498 - 16.593: 98.8969% ( 1) 00:14:42.105 17.067 - 17.161: 98.9209% ( 3) 00:14:42.106 17.351 - 17.446: 98.9528% ( 4) 00:14:42.106 17.446 - 17.541: 98.9688% ( 2) 00:14:42.106 17.541 - 17.636: 99.0328% ( 8) 00:14:42.106 17.636 - 17.730: 99.1127% ( 10) 00:14:42.106 17.730 - 17.825: 99.1767% ( 8) 00:14:42.106 17.825 - 17.920: 99.2086% ( 4) 00:14:42.106 17.920 - 18.015: 99.2726% ( 8) 00:14:42.106 18.015 - 18.110: 99.3365% ( 8) 00:14:42.106 18.110 - 18.204: 99.4005% ( 8) 00:14:42.106 18.204 - 18.299: 99.4325% ( 4) 00:14:42.106 18.299 - 18.394: 99.5044% ( 9) 00:14:42.106 18.394 - 18.489: 99.5524% ( 6) 00:14:42.106 18.489 - 18.584: 99.6083% ( 7) 00:14:42.106 18.584 - 18.679: 99.6643% ( 7) 00:14:42.106 18.679 - 18.773: 99.7202% ( 7) 00:14:42.106 18.773 - 18.868: 99.7522% ( 4) 00:14:42.106 18.868 - 18.963: 99.7842% ( 4) 00:14:42.106 18.963 - 19.058: 99.8082% ( 3) 00:14:42.106 19.058 - 19.153: 99.8161% ( 1) 00:14:42.106 19.153 - 19.247: 99.8481% ( 4) 00:14:42.106 19.247 - 19.342: 99.8641% ( 2) 00:14:42.106 19.532 - 19.627: 99.8801% ( 
2) 00:14:42.106 20.006 - 20.101: 99.8881% ( 1) 00:14:42.106 22.756 - 22.850: 99.8961% ( 1) 00:14:42.106 23.135 - 23.230: 99.9041% ( 1) 00:14:42.106 23.609 - 23.704: 99.9121% ( 1) 00:14:42.106 24.273 - 24.462: 99.9201% ( 1) 00:14:42.106 26.169 - 26.359: 99.9281% ( 1) 00:14:42.106 26.359 - 26.548: 99.9361% ( 1) 00:14:42.106 3980.705 - 4004.978: 99.9920% ( 7) 00:14:42.106 4004.978 - 4029.250: 100.0000% ( 1) 00:14:42.106 00:14:42.106 Complete histogram 00:14:42.106 ================== 00:14:42.106 Range in us Cumulative Count 00:14:42.106 2.086 - 2.098: 3.3573% ( 420) 00:14:42.106 2.098 - 2.110: 29.5364% ( 3275) 00:14:42.106 2.110 - 2.121: 35.0999% ( 696) 00:14:42.106 2.121 - 2.133: 43.7010% ( 1076) 00:14:42.106 2.133 - 2.145: 58.0735% ( 1798) 00:14:42.106 2.145 - 2.157: 60.7674% ( 337) 00:14:42.106 2.157 - 2.169: 67.0743% ( 789) 00:14:42.106 2.169 - 2.181: 76.0272% ( 1120) 00:14:42.106 2.181 - 2.193: 78.2174% ( 274) 00:14:42.106 2.193 - 2.204: 83.3573% ( 643) 00:14:42.106 2.204 - 2.216: 87.3062% ( 494) 00:14:42.106 2.216 - 2.228: 88.5372% ( 154) 00:14:42.106 2.228 - 2.240: 89.7682% ( 154) 00:14:42.106 2.240 - 2.252: 91.5667% ( 225) 00:14:42.106 2.252 - 2.264: 93.4612% ( 237) 00:14:42.106 2.264 - 2.276: 94.3645% ( 113) 00:14:42.106 2.276 - 2.287: 94.9960% ( 79) 00:14:42.106 2.287 - 2.299: 95.1958% ( 25) 00:14:42.106 2.299 - 2.311: 95.2998% ( 13) 00:14:42.106 2.311 - 2.323: 95.5635% ( 33) 00:14:42.106 2.323 - 2.335: 95.8513% ( 36) 00:14:42.106 2.335 - 2.347: 95.9233% ( 9) 00:14:42.106 2.347 - 2.359: 95.9552% ( 4) 00:14:42.106 2.359 - 2.370: 95.9872% ( 4) 00:14:42.106 2.370 - 2.382: 96.0192% ( 4) 00:14:42.106 2.382 - 2.394: 96.1231% ( 13) 00:14:42.106 2.394 - 2.406: 96.3229% ( 25) 00:14:42.106 2.406 - 2.418: 96.5468% ( 28) 00:14:42.106 2.418 - 2.430: 96.8345% ( 36) 00:14:42.106 2.430 - 2.441: 97.1863% ( 44) 00:14:42.106 2.441 - 2.453: 97.4261% ( 30) 00:14:42.106 2.453 - 2.465: 97.6898% ( 33) 00:14:42.106 2.465 - 2.477: 97.8737% ( 23) 00:14:42.106 2.477 - 2.489: 97.9936% ( 
15) 00:14:42.106 2.489 - 2.501: 98.0895% ( 12) 00:14:42.106 2.501 - 2.513: 98.1934% ( 13) 00:14:42.106 2.513 - 2.524: 98.2974% ( 13) 00:14:42.106 2.524 - 2.536: 98.3293% ( 4) 00:14:42.106 2.536 - 2.548: 98.3693% ( 5) 00:14:42.106 2.548 - 2.560: 98.3773% ( 1) 00:14:42.106 2.560 - 2.572: 98.4093% ( 4) 00:14:42.106 2.572 - 2.584: 98.4333% ( 3) 00:14:42.106 2.584 - 2.596: 98.4492% ( 2) 00:14:42.106 2.596 - 2.607: 98.4572% ( 1) 00:14:42.106 2.631 - 2.643: 98.4652% ( 1) 00:14:42.106 2.738 - 2.750: 98.4732% ( 1) 00:14:42.106 2.844 - 2.856: 98.4812% ( 1) 00:14:42.106 2.927 - 2.939: 98.4892% ( 1) 00:14:42.106 3.153 - 3.176: 98.4972% ( 1) 00:14:42.106 3.224 - 3.247: 98.5052% ( 1) 00:14:42.106 3.247 - 3.271: 98.5212% ( 2) 00:14:42.106 3.271 - 3.295: 98.5452% ( 3) 00:14:42.106 3.295 - 3.319: 98.5532% ( 1) 00:14:42.106 3.319 - 3.342: 98.5612% ( 1) 00:14:42.106 3.342 - 3.366: 98.5771% ( 2) 00:14:42.106 3.390 - 3.413: 98.6011% ( 3) 00:14:42.106 3.484 - 3.508: 98.6251% ( 3) 00:14:42.106 3.556 - 3.579: 98.6331% ( 1) 00:14:42.106 3.627 - 3.650: 98.6571% ( 3) 00:14:42.106 3.745 - 3.769: 98.6651% ( 1) 00:14:42.106 3.793 - 3.816: 98.6731% ( 1) 00:14:42.106 3.816 - 3.840: 98.6970% ( 3) 00:14:42.106 3.864 - 3.887: 98.7050% ( 1) 00:14:42.106 3.887 - 3.911: 98.7130% ( 1) 00:14:42.106 3.935 - 3.959: 98.7210% ( 1) 00:14:42.106 3.982 - 4.006: 98.7290% ( 1) 00:14:42.106 4.196 - 4.219: 98.7370% ( 1) 00:14:42.106 5.357 - 5.381: 98.7450% ( 1) 00:14:42.106 5.404 - 5.428: 98.7530% ( 1) 00:14:42.106 5.452 - 5.476: 98.7610% ( 1) 00:14:42.106 5.570 - 5.594: 98.7690% ( 1) 00:14:42.106 5.689 - 5.713: 98.7770% ( 1) 00:14:42.106 5.807 - 5.831: 98.7930% ( 2) 00:14:42.106 5.997 - 6.021: 98.8010% ( 1) 00:14:42.106 6.044 - 6.068: 98.8090% ( 1) 00:14:42.106 6.116 - 6.163: 98.8249% ( 2) 00:14:42.106 6.163 - 6.210: 98.8329% ( 1) 00:14:42.106 6.210 - 6.258: 98.8489% ( 2) 00:14:42.106 6.305 - 6.353: 98.8569% ( 1) 00:14:42.106 6.353 - 6.400: 98.8649% ( 1) 00:14:42.106 6.447 - 6.495: 98.8729% ( 1) 00:14:42.106 6.590 
- 6.637: 98.8809% ( 1) [2024-12-06 19:13:27.005542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.106 6.827 - 6.874: 98.8969% ( 2) 00:14:42.106 6.969 - 7.016: 98.9049% ( 1) 00:14:42.106 7.111 - 7.159: 98.9209% ( 2) 00:14:42.106 7.301 - 7.348: 98.9369% ( 2) 00:14:42.106 7.633 - 7.680: 98.9528% ( 2) 00:14:42.106 8.960 - 9.007: 98.9608% ( 1) 00:14:42.106 9.576 - 9.624: 98.9688% ( 1) 00:14:42.106 15.455 - 15.550: 98.9768% ( 1) 00:14:42.106 15.644 - 15.739: 98.9848% ( 1) 00:14:42.106 15.739 - 15.834: 99.0008% ( 2) 00:14:42.106 15.834 - 15.929: 99.0328% ( 4) 00:14:42.106 16.024 - 16.119: 99.0807% ( 6) 00:14:42.106 16.119 - 16.213: 99.1207% ( 5) 00:14:42.106 16.213 - 16.308: 99.1367% ( 2) 00:14:42.106 16.308 - 16.403: 99.1687% ( 4) 00:14:42.106 16.403 - 16.498: 99.2006% ( 4) 00:14:42.106 16.498 - 16.593: 99.2326% ( 4) 00:14:42.106 16.593 - 16.687: 99.2726% ( 5) 00:14:42.106 16.687 - 16.782: 99.2966% ( 3) 00:14:42.106 16.782 - 16.877: 99.3046% ( 1) 00:14:42.106 16.877 - 16.972: 99.3205% ( 2) 00:14:42.106 16.972 - 17.067: 99.3445% ( 3) 00:14:42.106 17.067 - 17.161: 99.3605% ( 2) 00:14:42.106 17.161 - 17.256: 99.3685% ( 1) 00:14:42.106 17.256 - 17.351: 99.3765% ( 1) 00:14:42.106 17.730 - 17.825: 99.3845% ( 1) 00:14:42.106 18.015 - 18.110: 99.3925% ( 1) 00:14:42.106 18.110 - 18.204: 99.4005% ( 1) 00:14:42.106 18.489 - 18.584: 99.4085% ( 1) 00:14:42.106 25.790 - 25.979: 99.4165% ( 1) 00:14:42.106 3980.705 - 4004.978: 99.8881% ( 59) 00:14:42.106 4004.978 - 4029.250: 99.9600% ( 9) 00:14:42.106 4029.250 - 4053.523: 99.9680% ( 1) 00:14:42.106 7961.410 - 8009.956: 100.0000% ( 4) 00:14:42.106 00:14:42.106 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:42.106 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:42.106 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:42.106 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:42.106 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:42.365 [ 00:14:42.365 { 00:14:42.365 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:42.365 "subtype": "Discovery", 00:14:42.365 "listen_addresses": [], 00:14:42.365 "allow_any_host": true, 00:14:42.365 "hosts": [] 00:14:42.365 }, 00:14:42.365 { 00:14:42.365 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:42.365 "subtype": "NVMe", 00:14:42.365 "listen_addresses": [ 00:14:42.365 { 00:14:42.365 "trtype": "VFIOUSER", 00:14:42.365 "adrfam": "IPv4", 00:14:42.365 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:42.365 "trsvcid": "0" 00:14:42.365 } 00:14:42.365 ], 00:14:42.365 "allow_any_host": true, 00:14:42.365 "hosts": [], 00:14:42.365 "serial_number": "SPDK1", 00:14:42.365 "model_number": "SPDK bdev Controller", 00:14:42.365 "max_namespaces": 32, 00:14:42.365 "min_cntlid": 1, 00:14:42.365 "max_cntlid": 65519, 00:14:42.365 "namespaces": [ 00:14:42.365 { 00:14:42.365 "nsid": 1, 00:14:42.365 "bdev_name": "Malloc1", 00:14:42.365 "name": "Malloc1", 00:14:42.365 "nguid": "73B553ED144F4BFA9D253C5789CAF276", 00:14:42.365 "uuid": "73b553ed-144f-4bfa-9d25-3c5789caf276" 00:14:42.365 } 00:14:42.365 ] 00:14:42.365 }, 00:14:42.365 { 00:14:42.365 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:42.365 "subtype": "NVMe", 00:14:42.365 "listen_addresses": [ 00:14:42.365 { 00:14:42.365 "trtype": "VFIOUSER", 00:14:42.365 "adrfam": "IPv4", 00:14:42.365 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:42.365 "trsvcid": "0" 00:14:42.365 } 00:14:42.365 ], 00:14:42.365 "allow_any_host": 
true, 00:14:42.365 "hosts": [], 00:14:42.365 "serial_number": "SPDK2", 00:14:42.365 "model_number": "SPDK bdev Controller", 00:14:42.365 "max_namespaces": 32, 00:14:42.365 "min_cntlid": 1, 00:14:42.365 "max_cntlid": 65519, 00:14:42.365 "namespaces": [ 00:14:42.365 { 00:14:42.365 "nsid": 1, 00:14:42.365 "bdev_name": "Malloc2", 00:14:42.365 "name": "Malloc2", 00:14:42.365 "nguid": "7AFFA17D28DE448EBA05C2D053D9382C", 00:14:42.365 "uuid": "7affa17d-28de-448e-ba05-c2d053d9382c" 00:14:42.365 } 00:14:42.365 ] 00:14:42.365 } 00:14:42.365 ] 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=191537 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:14:42.365 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:14:42.624 [2024-12-06 19:13:27.503223] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:42.624 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:42.883 Malloc3 00:14:42.883 19:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:43.141 [2024-12-06 19:13:28.111945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:43.141 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:43.141 Asynchronous Event Request test 00:14:43.141 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.141 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:43.141 
Registering asynchronous event callbacks... 00:14:43.141 Starting namespace attribute notice tests for all controllers... 00:14:43.141 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:43.141 aer_cb - Changed Namespace 00:14:43.141 Cleaning up... 00:14:43.400 [ 00:14:43.400 { 00:14:43.400 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:43.400 "subtype": "Discovery", 00:14:43.400 "listen_addresses": [], 00:14:43.400 "allow_any_host": true, 00:14:43.400 "hosts": [] 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:43.400 "subtype": "NVMe", 00:14:43.400 "listen_addresses": [ 00:14:43.400 { 00:14:43.400 "trtype": "VFIOUSER", 00:14:43.400 "adrfam": "IPv4", 00:14:43.400 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:43.400 "trsvcid": "0" 00:14:43.400 } 00:14:43.400 ], 00:14:43.400 "allow_any_host": true, 00:14:43.400 "hosts": [], 00:14:43.400 "serial_number": "SPDK1", 00:14:43.400 "model_number": "SPDK bdev Controller", 00:14:43.400 "max_namespaces": 32, 00:14:43.400 "min_cntlid": 1, 00:14:43.400 "max_cntlid": 65519, 00:14:43.400 "namespaces": [ 00:14:43.400 { 00:14:43.400 "nsid": 1, 00:14:43.400 "bdev_name": "Malloc1", 00:14:43.400 "name": "Malloc1", 00:14:43.400 "nguid": "73B553ED144F4BFA9D253C5789CAF276", 00:14:43.400 "uuid": "73b553ed-144f-4bfa-9d25-3c5789caf276" 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "nsid": 2, 00:14:43.400 "bdev_name": "Malloc3", 00:14:43.400 "name": "Malloc3", 00:14:43.400 "nguid": "7FFBBBB8550949299714C19680687229", 00:14:43.400 "uuid": "7ffbbbb8-5509-4929-9714-c19680687229" 00:14:43.400 } 00:14:43.400 ] 00:14:43.400 }, 00:14:43.400 { 00:14:43.400 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:43.400 "subtype": "NVMe", 00:14:43.400 "listen_addresses": [ 00:14:43.400 { 00:14:43.400 "trtype": "VFIOUSER", 00:14:43.400 "adrfam": "IPv4", 00:14:43.400 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:43.400 "trsvcid": "0" 
00:14:43.400 } 00:14:43.400 ], 00:14:43.400 "allow_any_host": true, 00:14:43.400 "hosts": [], 00:14:43.400 "serial_number": "SPDK2", 00:14:43.400 "model_number": "SPDK bdev Controller", 00:14:43.400 "max_namespaces": 32, 00:14:43.400 "min_cntlid": 1, 00:14:43.400 "max_cntlid": 65519, 00:14:43.400 "namespaces": [ 00:14:43.400 { 00:14:43.400 "nsid": 1, 00:14:43.400 "bdev_name": "Malloc2", 00:14:43.400 "name": "Malloc2", 00:14:43.400 "nguid": "7AFFA17D28DE448EBA05C2D053D9382C", 00:14:43.400 "uuid": "7affa17d-28de-448e-ba05-c2d053d9382c" 00:14:43.400 } 00:14:43.400 ] 00:14:43.400 } 00:14:43.400 ] 00:14:43.400 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 191537 00:14:43.400 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.400 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:43.400 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:43.400 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:43.400 [2024-12-06 19:13:28.423652] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:14:43.401 [2024-12-06 19:13:28.423692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191674 ] 00:14:43.663 [2024-12-06 19:13:28.473550] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:43.663 [2024-12-06 19:13:28.477885] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:43.663 [2024-12-06 19:13:28.477919] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbc7256e000 00:14:43.663 [2024-12-06 19:13:28.478879] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.479882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.480884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.481889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.482894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.483893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.484904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:43.663 
[2024-12-06 19:13:28.485915] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:43.663 [2024-12-06 19:13:28.486921] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:43.663 [2024-12-06 19:13:28.486944] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbc72563000 00:14:43.663 [2024-12-06 19:13:28.488080] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:43.663 [2024-12-06 19:13:28.504761] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:43.663 [2024-12-06 19:13:28.504803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:43.663 [2024-12-06 19:13:28.506892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:43.663 [2024-12-06 19:13:28.506950] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:43.663 [2024-12-06 19:13:28.507058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:43.663 [2024-12-06 19:13:28.507085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:43.663 [2024-12-06 19:13:28.507096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:43.663 [2024-12-06 19:13:28.507896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:43.663 [2024-12-06 19:13:28.507919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:43.663 [2024-12-06 19:13:28.507932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:43.663 [2024-12-06 19:13:28.508900] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:43.663 [2024-12-06 19:13:28.508922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:43.664 [2024-12-06 19:13:28.508936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.509903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:43.664 [2024-12-06 19:13:28.509924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.510912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:43.664 [2024-12-06 19:13:28.510933] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:43.664 [2024-12-06 19:13:28.510943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.510954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.511065] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:43.664 [2024-12-06 19:13:28.511074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.511082] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:43.664 [2024-12-06 19:13:28.511913] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:43.664 [2024-12-06 19:13:28.512925] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:43.664 [2024-12-06 19:13:28.513937] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:43.664 [2024-12-06 19:13:28.514933] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.664 [2024-12-06 19:13:28.515001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:43.664 [2024-12-06 19:13:28.515954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:43.664 [2024-12-06 19:13:28.515975] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:43.664 [2024-12-06 19:13:28.515989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.516015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:43.664 [2024-12-06 19:13:28.516042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.516069] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.664 [2024-12-06 19:13:28.516079] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.664 [2024-12-06 19:13:28.516086] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.664 [2024-12-06 19:13:28.516109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.664 [2024-12-06 19:13:28.522753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:43.664 [2024-12-06 19:13:28.522778] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:43.664 [2024-12-06 19:13:28.522792] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:43.664 [2024-12-06 19:13:28.522800] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:43.664 [2024-12-06 19:13:28.522810] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:43.664 [2024-12-06 19:13:28.522817] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:43.664 [2024-12-06 19:13:28.522825] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:43.664 [2024-12-06 19:13:28.522832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.522846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.522863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:43.664 [2024-12-06 19:13:28.530736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:43.664 [2024-12-06 19:13:28.530761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.664 [2024-12-06 19:13:28.530774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.664 [2024-12-06 19:13:28.530786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.664 [2024-12-06 19:13:28.530798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.664 [2024-12-06 19:13:28.530806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.530823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.530838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:43.664 [2024-12-06 19:13:28.538748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:43.664 [2024-12-06 19:13:28.538768] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:43.664 [2024-12-06 19:13:28.538778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.538789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.538800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.538825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:43.664 [2024-12-06 19:13:28.546732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:43.664 [2024-12-06 19:13:28.546809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:43.664 [2024-12-06 19:13:28.546827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:43.664 
[2024-12-06 19:13:28.546841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:43.664 [2024-12-06 19:13:28.546849] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:43.664 [2024-12-06 19:13:28.546855] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.664 [2024-12-06 19:13:28.546865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:43.664 [2024-12-06 19:13:28.554730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:43.664 [2024-12-06 19:13:28.554765] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:43.664 [2024-12-06 19:13:28.554790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.554806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.554820] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.665 [2024-12-06 19:13:28.554829] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.665 [2024-12-06 19:13:28.554835] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.665 [2024-12-06 19:13:28.554844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.562747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.562776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.562793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.562807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:43.665 [2024-12-06 19:13:28.562815] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.665 [2024-12-06 19:13:28.562825] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.665 [2024-12-06 19:13:28.562835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.570732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.570753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570820] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:43.665 [2024-12-06 19:13:28.570827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:43.665 [2024-12-06 19:13:28.570836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:43.665 [2024-12-06 19:13:28.570860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.578734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.578760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.586734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.586759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.594733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 
19:13:28.594758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.602740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.602773] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:43.665 [2024-12-06 19:13:28.602784] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:43.665 [2024-12-06 19:13:28.602799] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:43.665 [2024-12-06 19:13:28.602804] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:43.665 [2024-12-06 19:13:28.602810] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:43.665 [2024-12-06 19:13:28.602820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:43.665 [2024-12-06 19:13:28.602836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:43.665 [2024-12-06 19:13:28.602845] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:43.665 [2024-12-06 19:13:28.602851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.665 [2024-12-06 19:13:28.602860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.602871] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:43.665 [2024-12-06 19:13:28.602880] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:43.665 [2024-12-06 19:13:28.602886] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.665 [2024-12-06 19:13:28.602895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.602907] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:43.665 [2024-12-06 19:13:28.602916] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:43.665 [2024-12-06 19:13:28.602921] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:43.665 [2024-12-06 19:13:28.602930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:43.665 [2024-12-06 19:13:28.610749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.610795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:43.665 [2024-12-06 19:13:28.610807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:43.665 ===================================================== 00:14:43.665 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.665 ===================================================== 00:14:43.665 Controller Capabilities/Features 00:14:43.665 
================================ 00:14:43.665 Vendor ID: 4e58 00:14:43.665 Subsystem Vendor ID: 4e58 00:14:43.665 Serial Number: SPDK2 00:14:43.665 Model Number: SPDK bdev Controller 00:14:43.665 Firmware Version: 25.01 00:14:43.665 Recommended Arb Burst: 6 00:14:43.665 IEEE OUI Identifier: 8d 6b 50 00:14:43.665 Multi-path I/O 00:14:43.665 May have multiple subsystem ports: Yes 00:14:43.665 May have multiple controllers: Yes 00:14:43.665 Associated with SR-IOV VF: No 00:14:43.665 Max Data Transfer Size: 131072 00:14:43.665 Max Number of Namespaces: 32 00:14:43.665 Max Number of I/O Queues: 127 00:14:43.665 NVMe Specification Version (VS): 1.3 00:14:43.665 NVMe Specification Version (Identify): 1.3 00:14:43.665 Maximum Queue Entries: 256 00:14:43.665 Contiguous Queues Required: Yes 00:14:43.665 Arbitration Mechanisms Supported 00:14:43.665 Weighted Round Robin: Not Supported 00:14:43.665 Vendor Specific: Not Supported 00:14:43.665 Reset Timeout: 15000 ms 00:14:43.666 Doorbell Stride: 4 bytes 00:14:43.666 NVM Subsystem Reset: Not Supported 00:14:43.666 Command Sets Supported 00:14:43.666 NVM Command Set: Supported 00:14:43.666 Boot Partition: Not Supported 00:14:43.666 Memory Page Size Minimum: 4096 bytes 00:14:43.666 Memory Page Size Maximum: 4096 bytes 00:14:43.666 Persistent Memory Region: Not Supported 00:14:43.666 Optional Asynchronous Events Supported 00:14:43.666 Namespace Attribute Notices: Supported 00:14:43.666 Firmware Activation Notices: Not Supported 00:14:43.666 ANA Change Notices: Not Supported 00:14:43.666 PLE Aggregate Log Change Notices: Not Supported 00:14:43.666 LBA Status Info Alert Notices: Not Supported 00:14:43.666 EGE Aggregate Log Change Notices: Not Supported 00:14:43.666 Normal NVM Subsystem Shutdown event: Not Supported 00:14:43.666 Zone Descriptor Change Notices: Not Supported 00:14:43.666 Discovery Log Change Notices: Not Supported 00:14:43.666 Controller Attributes 00:14:43.666 128-bit Host Identifier: Supported 00:14:43.666 
Non-Operational Permissive Mode: Not Supported 00:14:43.666 NVM Sets: Not Supported 00:14:43.666 Read Recovery Levels: Not Supported 00:14:43.666 Endurance Groups: Not Supported 00:14:43.666 Predictable Latency Mode: Not Supported 00:14:43.666 Traffic Based Keep ALive: Not Supported 00:14:43.666 Namespace Granularity: Not Supported 00:14:43.666 SQ Associations: Not Supported 00:14:43.666 UUID List: Not Supported 00:14:43.666 Multi-Domain Subsystem: Not Supported 00:14:43.666 Fixed Capacity Management: Not Supported 00:14:43.666 Variable Capacity Management: Not Supported 00:14:43.666 Delete Endurance Group: Not Supported 00:14:43.666 Delete NVM Set: Not Supported 00:14:43.666 Extended LBA Formats Supported: Not Supported 00:14:43.666 Flexible Data Placement Supported: Not Supported 00:14:43.666 00:14:43.666 Controller Memory Buffer Support 00:14:43.666 ================================ 00:14:43.666 Supported: No 00:14:43.666 00:14:43.666 Persistent Memory Region Support 00:14:43.666 ================================ 00:14:43.666 Supported: No 00:14:43.666 00:14:43.666 Admin Command Set Attributes 00:14:43.666 ============================ 00:14:43.666 Security Send/Receive: Not Supported 00:14:43.666 Format NVM: Not Supported 00:14:43.666 Firmware Activate/Download: Not Supported 00:14:43.666 Namespace Management: Not Supported 00:14:43.666 Device Self-Test: Not Supported 00:14:43.666 Directives: Not Supported 00:14:43.666 NVMe-MI: Not Supported 00:14:43.666 Virtualization Management: Not Supported 00:14:43.666 Doorbell Buffer Config: Not Supported 00:14:43.666 Get LBA Status Capability: Not Supported 00:14:43.666 Command & Feature Lockdown Capability: Not Supported 00:14:43.666 Abort Command Limit: 4 00:14:43.666 Async Event Request Limit: 4 00:14:43.666 Number of Firmware Slots: N/A 00:14:43.666 Firmware Slot 1 Read-Only: N/A 00:14:43.666 Firmware Activation Without Reset: N/A 00:14:43.666 Multiple Update Detection Support: N/A 00:14:43.666 Firmware Update 
Granularity: No Information Provided 00:14:43.666 Per-Namespace SMART Log: No 00:14:43.666 Asymmetric Namespace Access Log Page: Not Supported 00:14:43.666 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:43.666 Command Effects Log Page: Supported 00:14:43.666 Get Log Page Extended Data: Supported 00:14:43.666 Telemetry Log Pages: Not Supported 00:14:43.666 Persistent Event Log Pages: Not Supported 00:14:43.666 Supported Log Pages Log Page: May Support 00:14:43.666 Commands Supported & Effects Log Page: Not Supported 00:14:43.666 Feature Identifiers & Effects Log Page:May Support 00:14:43.666 NVMe-MI Commands & Effects Log Page: May Support 00:14:43.666 Data Area 4 for Telemetry Log: Not Supported 00:14:43.666 Error Log Page Entries Supported: 128 00:14:43.666 Keep Alive: Supported 00:14:43.666 Keep Alive Granularity: 10000 ms 00:14:43.666 00:14:43.666 NVM Command Set Attributes 00:14:43.666 ========================== 00:14:43.666 Submission Queue Entry Size 00:14:43.666 Max: 64 00:14:43.666 Min: 64 00:14:43.666 Completion Queue Entry Size 00:14:43.666 Max: 16 00:14:43.666 Min: 16 00:14:43.666 Number of Namespaces: 32 00:14:43.666 Compare Command: Supported 00:14:43.666 Write Uncorrectable Command: Not Supported 00:14:43.666 Dataset Management Command: Supported 00:14:43.666 Write Zeroes Command: Supported 00:14:43.666 Set Features Save Field: Not Supported 00:14:43.666 Reservations: Not Supported 00:14:43.666 Timestamp: Not Supported 00:14:43.666 Copy: Supported 00:14:43.666 Volatile Write Cache: Present 00:14:43.666 Atomic Write Unit (Normal): 1 00:14:43.666 Atomic Write Unit (PFail): 1 00:14:43.666 Atomic Compare & Write Unit: 1 00:14:43.666 Fused Compare & Write: Supported 00:14:43.666 Scatter-Gather List 00:14:43.666 SGL Command Set: Supported (Dword aligned) 00:14:43.666 SGL Keyed: Not Supported 00:14:43.666 SGL Bit Bucket Descriptor: Not Supported 00:14:43.666 SGL Metadata Pointer: Not Supported 00:14:43.666 Oversized SGL: Not Supported 00:14:43.666 SGL 
Metadata Address: Not Supported 00:14:43.666 SGL Offset: Not Supported 00:14:43.666 Transport SGL Data Block: Not Supported 00:14:43.666 Replay Protected Memory Block: Not Supported 00:14:43.666 00:14:43.666 Firmware Slot Information 00:14:43.666 ========================= 00:14:43.666 Active slot: 1 00:14:43.666 Slot 1 Firmware Revision: 25.01 00:14:43.666 00:14:43.666 00:14:43.666 Commands Supported and Effects 00:14:43.666 ============================== 00:14:43.666 Admin Commands 00:14:43.666 -------------- 00:14:43.666 Get Log Page (02h): Supported 00:14:43.666 Identify (06h): Supported 00:14:43.666 Abort (08h): Supported 00:14:43.666 Set Features (09h): Supported 00:14:43.666 Get Features (0Ah): Supported 00:14:43.666 Asynchronous Event Request (0Ch): Supported 00:14:43.666 Keep Alive (18h): Supported 00:14:43.666 I/O Commands 00:14:43.666 ------------ 00:14:43.666 Flush (00h): Supported LBA-Change 00:14:43.666 Write (01h): Supported LBA-Change 00:14:43.666 Read (02h): Supported 00:14:43.666 Compare (05h): Supported 00:14:43.666 Write Zeroes (08h): Supported LBA-Change 00:14:43.666 Dataset Management (09h): Supported LBA-Change 00:14:43.666 Copy (19h): Supported LBA-Change 00:14:43.666 00:14:43.666 Error Log 00:14:43.666 ========= 00:14:43.666 00:14:43.666 Arbitration 00:14:43.666 =========== 00:14:43.666 Arbitration Burst: 1 00:14:43.666 00:14:43.666 Power Management 00:14:43.666 ================ 00:14:43.666 Number of Power States: 1 00:14:43.666 Current Power State: Power State #0 00:14:43.666 Power State #0: 00:14:43.666 Max Power: 0.00 W 00:14:43.666 Non-Operational State: Operational 00:14:43.666 Entry Latency: Not Reported 00:14:43.667 Exit Latency: Not Reported 00:14:43.667 Relative Read Throughput: 0 00:14:43.667 Relative Read Latency: 0 00:14:43.667 Relative Write Throughput: 0 00:14:43.667 Relative Write Latency: 0 00:14:43.667 Idle Power: Not Reported 00:14:43.667 Active Power: Not Reported 00:14:43.667 Non-Operational Permissive Mode: Not 
Supported 00:14:43.667 00:14:43.667 Health Information 00:14:43.667 ================== 00:14:43.667 Critical Warnings: 00:14:43.667 Available Spare Space: OK 00:14:43.667 Temperature: OK 00:14:43.667 Device Reliability: OK 00:14:43.667 Read Only: No 00:14:43.667 Volatile Memory Backup: OK 00:14:43.667 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:43.667 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:43.667 Available Spare: 0% 00:14:43.667 Available Spare Threshold: 0% 00:14:43.667 [2024-12-06 19:13:28.610932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:43.667 [2024-12-06 19:13:28.618748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:43.667 [2024-12-06 19:13:28.618800] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:43.667 [2024-12-06 19:13:28.618818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.667 [2024-12-06 19:13:28.618829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.667 [2024-12-06 19:13:28.618838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.667 [2024-12-06 19:13:28.618848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.667 [2024-12-06 19:13:28.618931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:43.667 [2024-12-06 19:13:28.618955] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:43.667 
[2024-12-06 19:13:28.619933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.667 [2024-12-06 19:13:28.620005] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:43.667 [2024-12-06 19:13:28.620020] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:43.667 [2024-12-06 19:13:28.620948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:43.667 [2024-12-06 19:13:28.620972] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:43.667 [2024-12-06 19:13:28.621025] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:43.667 [2024-12-06 19:13:28.623733] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:43.667 Life Percentage Used: 0% 00:14:43.667 Data Units Read: 0 00:14:43.667 Data Units Written: 0 00:14:43.667 Host Read Commands: 0 00:14:43.667 Host Write Commands: 0 00:14:43.667 Controller Busy Time: 0 minutes 00:14:43.667 Power Cycles: 0 00:14:43.667 Power On Hours: 0 hours 00:14:43.667 Unsafe Shutdowns: 0 00:14:43.667 Unrecoverable Media Errors: 0 00:14:43.667 Lifetime Error Log Entries: 0 00:14:43.667 Warning Temperature Time: 0 minutes 00:14:43.667 Critical Temperature Time: 0 minutes 00:14:43.667 00:14:43.667 Number of Queues 00:14:43.667 ================ 00:14:43.667 Number of I/O Submission Queues: 127 00:14:43.667 Number of I/O Completion Queues: 127 00:14:43.667 00:14:43.667 Active Namespaces 00:14:43.667 ================= 00:14:43.667 Namespace ID:1 00:14:43.667 Error Recovery Timeout: Unlimited 
00:14:43.667 Command Set Identifier: NVM (00h) 00:14:43.667 Deallocate: Supported 00:14:43.667 Deallocated/Unwritten Error: Not Supported 00:14:43.667 Deallocated Read Value: Unknown 00:14:43.667 Deallocate in Write Zeroes: Not Supported 00:14:43.667 Deallocated Guard Field: 0xFFFF 00:14:43.667 Flush: Supported 00:14:43.667 Reservation: Supported 00:14:43.667 Namespace Sharing Capabilities: Multiple Controllers 00:14:43.667 Size (in LBAs): 131072 (0GiB) 00:14:43.667 Capacity (in LBAs): 131072 (0GiB) 00:14:43.667 Utilization (in LBAs): 131072 (0GiB) 00:14:43.667 NGUID: 7AFFA17D28DE448EBA05C2D053D9382C 00:14:43.667 UUID: 7affa17d-28de-448e-ba05-c2d053d9382c 00:14:43.667 Thin Provisioning: Not Supported 00:14:43.667 Per-NS Atomic Units: Yes 00:14:43.667 Atomic Boundary Size (Normal): 0 00:14:43.667 Atomic Boundary Size (PFail): 0 00:14:43.667 Atomic Boundary Offset: 0 00:14:43.667 Maximum Single Source Range Length: 65535 00:14:43.667 Maximum Copy Length: 65535 00:14:43.667 Maximum Source Range Count: 1 00:14:43.667 NGUID/EUI64 Never Reused: No 00:14:43.667 Namespace Write Protected: No 00:14:43.667 Number of LBA Formats: 1 00:14:43.667 Current LBA Format: LBA Format #00 00:14:43.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:43.667 00:14:43.667 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:43.924 [2024-12-06 19:13:28.861472] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.204 Initializing NVMe Controllers 00:14:49.204 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:49.204 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:49.204 Initialization complete. Launching workers. 00:14:49.204 ======================================================== 00:14:49.204 Latency(us) 00:14:49.204 Device Information : IOPS MiB/s Average min max 00:14:49.204 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31390.71 122.62 4076.79 1206.38 8216.43 00:14:49.204 ======================================================== 00:14:49.204 Total : 31390.71 122.62 4076.79 1206.38 8216.43 00:14:49.204 00:14:49.204 [2024-12-06 19:13:33.967169] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.204 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:49.204 [2024-12-06 19:13:34.227812] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.474 Initializing NVMe Controllers 00:14:54.474 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:54.474 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:54.474 Initialization complete. Launching workers. 
00:14:54.474 ======================================================== 00:14:54.474 Latency(us) 00:14:54.474 Device Information : IOPS MiB/s Average min max 00:14:54.474 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30888.74 120.66 4142.93 1212.83 7833.90 00:14:54.474 ======================================================== 00:14:54.474 Total : 30888.74 120.66 4142.93 1212.83 7833.90 00:14:54.474 00:14:54.474 [2024-12-06 19:13:39.248177] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.474 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:54.474 [2024-12-06 19:13:39.492108] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.751 [2024-12-06 19:13:44.615886] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.751 Initializing NVMe Controllers 00:14:59.751 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:59.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:59.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:59.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:59.751 Initialization complete. Launching workers. 
00:14:59.751 Starting thread on core 2 00:14:59.751 Starting thread on core 3 00:14:59.751 Starting thread on core 1 00:14:59.751 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:00.009 [2024-12-06 19:13:44.943215] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.306 [2024-12-06 19:13:48.016772] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.306 Initializing NVMe Controllers 00:15:03.306 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.306 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:03.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:03.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:03.306 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:03.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:03.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:03.306 Initialization complete. Launching workers. 
00:15:03.306 Starting thread on core 1 with urgent priority queue 00:15:03.306 Starting thread on core 2 with urgent priority queue 00:15:03.306 Starting thread on core 3 with urgent priority queue 00:15:03.306 Starting thread on core 0 with urgent priority queue 00:15:03.306 SPDK bdev Controller (SPDK2 ) core 0: 5491.67 IO/s 18.21 secs/100000 ios 00:15:03.306 SPDK bdev Controller (SPDK2 ) core 1: 4805.67 IO/s 20.81 secs/100000 ios 00:15:03.306 SPDK bdev Controller (SPDK2 ) core 2: 5775.67 IO/s 17.31 secs/100000 ios 00:15:03.306 SPDK bdev Controller (SPDK2 ) core 3: 5529.33 IO/s 18.09 secs/100000 ios 00:15:03.306 ======================================================== 00:15:03.306 00:15:03.306 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:03.306 [2024-12-06 19:13:48.341246] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.306 Initializing NVMe Controllers 00:15:03.306 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.306 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:03.306 Namespace ID: 1 size: 0GB 00:15:03.306 Initialization complete. 00:15:03.306 INFO: using host memory buffer for IO 00:15:03.306 Hello world! 
00:15:03.306 [2024-12-06 19:13:48.351315] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.564 19:13:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:03.822 [2024-12-06 19:13:48.666270] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:04.762 Initializing NVMe Controllers 00:15:04.762 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:04.762 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:04.762 Initialization complete. Launching workers. 00:15:04.762 submit (in ns) avg, min, max = 6185.0, 3485.6, 4017528.9 00:15:04.762 complete (in ns) avg, min, max = 30091.3, 2060.0, 6994160.0 00:15:04.762 00:15:04.762 Submit histogram 00:15:04.762 ================ 00:15:04.762 Range in us Cumulative Count 00:15:04.762 3.484 - 3.508: 0.0080% ( 1) 00:15:04.762 3.508 - 3.532: 0.1847% ( 22) 00:15:04.762 3.532 - 3.556: 0.8189% ( 79) 00:15:04.762 3.556 - 3.579: 2.9546% ( 266) 00:15:04.762 3.579 - 3.603: 7.2421% ( 534) 00:15:04.762 3.603 - 3.627: 14.4440% ( 897) 00:15:04.762 3.627 - 3.650: 23.3320% ( 1107) 00:15:04.762 3.650 - 3.674: 33.5528% ( 1273) 00:15:04.762 3.674 - 3.698: 41.1240% ( 943) 00:15:04.762 3.698 - 3.721: 49.9157% ( 1095) 00:15:04.762 3.721 - 3.745: 55.6724% ( 717) 00:15:04.762 3.745 - 3.769: 60.9394% ( 656) 00:15:04.762 3.769 - 3.793: 64.4801% ( 441) 00:15:04.762 3.793 - 3.816: 67.9888% ( 437) 00:15:04.762 3.816 - 3.840: 71.3850% ( 423) 00:15:04.762 3.840 - 3.864: 74.8535% ( 432) 00:15:04.762 3.864 - 3.887: 78.4585% ( 449) 00:15:04.762 3.887 - 3.911: 82.0795% ( 451) 00:15:04.762 3.911 - 3.935: 85.5560% ( 433) 00:15:04.762 3.935 - 3.959: 87.8764% ( 289) 00:15:04.762 3.959 - 3.982: 89.5303% ( 206) 
00:15:04.762 3.982 - 4.006: 91.1040% ( 196) 00:15:04.762 4.006 - 4.030: 92.5171% ( 176) 00:15:04.762 4.030 - 4.053: 93.6331% ( 139) 00:15:04.762 4.053 - 4.077: 94.6367% ( 125) 00:15:04.762 4.077 - 4.101: 95.2629% ( 78) 00:15:04.762 4.101 - 4.124: 95.7688% ( 63) 00:15:04.762 4.124 - 4.148: 96.0177% ( 31) 00:15:04.762 4.148 - 4.172: 96.2585% ( 30) 00:15:04.762 4.172 - 4.196: 96.4673% ( 26) 00:15:04.762 4.196 - 4.219: 96.6279% ( 20) 00:15:04.762 4.219 - 4.243: 96.7242% ( 12) 00:15:04.762 4.243 - 4.267: 96.8687% ( 18) 00:15:04.762 4.267 - 4.290: 96.9651% ( 12) 00:15:04.762 4.290 - 4.314: 97.0373% ( 9) 00:15:04.762 4.314 - 4.338: 97.1176% ( 10) 00:15:04.762 4.338 - 4.361: 97.1658% ( 6) 00:15:04.762 4.361 - 4.385: 97.2220% ( 7) 00:15:04.762 4.385 - 4.409: 97.2381% ( 2) 00:15:04.762 4.409 - 4.433: 97.2702% ( 4) 00:15:04.762 4.433 - 4.456: 97.2943% ( 3) 00:15:04.762 4.527 - 4.551: 97.3023% ( 1) 00:15:04.762 4.551 - 4.575: 97.3103% ( 1) 00:15:04.762 4.622 - 4.646: 97.3264% ( 2) 00:15:04.762 4.646 - 4.670: 97.3424% ( 2) 00:15:04.762 4.670 - 4.693: 97.3665% ( 3) 00:15:04.762 4.693 - 4.717: 97.3826% ( 2) 00:15:04.762 4.717 - 4.741: 97.4468% ( 8) 00:15:04.762 4.741 - 4.764: 97.4950% ( 6) 00:15:04.762 4.764 - 4.788: 97.5512% ( 7) 00:15:04.762 4.788 - 4.812: 97.5994% ( 6) 00:15:04.762 4.812 - 4.836: 97.6556% ( 7) 00:15:04.762 4.836 - 4.859: 97.7358% ( 10) 00:15:04.762 4.883 - 4.907: 97.7840% ( 6) 00:15:04.762 4.907 - 4.930: 97.8242% ( 5) 00:15:04.762 4.930 - 4.954: 97.8804% ( 7) 00:15:04.762 4.954 - 4.978: 97.9205% ( 5) 00:15:04.762 4.978 - 5.001: 97.9767% ( 7) 00:15:04.762 5.001 - 5.025: 98.0169% ( 5) 00:15:04.762 5.025 - 5.049: 98.0329% ( 2) 00:15:04.762 5.049 - 5.073: 98.0650% ( 4) 00:15:04.762 5.073 - 5.096: 98.0971% ( 4) 00:15:04.762 5.096 - 5.120: 98.1132% ( 2) 00:15:04.762 5.120 - 5.144: 98.1212% ( 1) 00:15:04.762 5.144 - 5.167: 98.1373% ( 2) 00:15:04.762 5.262 - 5.286: 98.1453% ( 1) 00:15:04.762 5.286 - 5.310: 98.1694% ( 3) 00:15:04.762 5.333 - 5.357: 98.1774% ( 1) 
00:15:04.762 5.357 - 5.381: 98.1855% ( 1) 00:15:04.762 5.381 - 5.404: 98.2015% ( 2) 00:15:04.762 5.404 - 5.428: 98.2096% ( 1) 00:15:04.762 5.499 - 5.523: 98.2176% ( 1) 00:15:04.762 5.523 - 5.547: 98.2256% ( 1) 00:15:04.762 5.547 - 5.570: 98.2417% ( 2) 00:15:04.762 5.570 - 5.594: 98.2577% ( 2) 00:15:04.762 5.641 - 5.665: 98.2738% ( 2) 00:15:04.762 5.665 - 5.689: 98.2818% ( 1) 00:15:04.762 5.689 - 5.713: 98.2898% ( 1) 00:15:04.762 5.831 - 5.855: 98.2979% ( 1) 00:15:04.762 6.258 - 6.305: 98.3059% ( 1) 00:15:04.762 6.305 - 6.353: 98.3139% ( 1) 00:15:04.762 6.353 - 6.400: 98.3220% ( 1) 00:15:04.762 6.447 - 6.495: 98.3300% ( 1) 00:15:04.762 6.495 - 6.542: 98.3380% ( 1) 00:15:04.762 6.590 - 6.637: 98.3460% ( 1) 00:15:04.763 6.732 - 6.779: 98.3541% ( 1) 00:15:04.763 6.827 - 6.874: 98.3701% ( 2) 00:15:04.763 6.874 - 6.921: 98.3862% ( 2) 00:15:04.763 6.921 - 6.969: 98.4022% ( 2) 00:15:04.763 7.016 - 7.064: 98.4103% ( 1) 00:15:04.763 7.064 - 7.111: 98.4344% ( 3) 00:15:04.763 7.206 - 7.253: 98.4424% ( 1) 00:15:04.763 7.253 - 7.301: 98.4504% ( 1) 00:15:04.763 7.301 - 7.348: 98.4585% ( 1) 00:15:04.763 7.348 - 7.396: 98.4906% ( 4) 00:15:04.763 7.490 - 7.538: 98.4986% ( 1) 00:15:04.763 7.680 - 7.727: 98.5066% ( 1) 00:15:04.763 7.822 - 7.870: 98.5307% ( 3) 00:15:04.763 8.012 - 8.059: 98.5387% ( 1) 00:15:04.763 8.107 - 8.154: 98.5548% ( 2) 00:15:04.763 8.296 - 8.344: 98.5628% ( 1) 00:15:04.763 8.344 - 8.391: 98.5709% ( 1) 00:15:04.763 8.391 - 8.439: 98.5789% ( 1) 00:15:04.763 8.486 - 8.533: 98.5869% ( 1) 00:15:04.763 8.533 - 8.581: 98.5949% ( 1) 00:15:04.763 8.628 - 8.676: 98.6110% ( 2) 00:15:04.763 8.676 - 8.723: 98.6190% ( 1) 00:15:04.763 8.723 - 8.770: 98.6271% ( 1) 00:15:04.763 8.865 - 8.913: 98.6351% ( 1) 00:15:04.763 8.913 - 8.960: 98.6511% ( 2) 00:15:04.763 9.007 - 9.055: 98.6672% ( 2) 00:15:04.763 9.055 - 9.102: 98.6833% ( 2) 00:15:04.763 9.102 - 9.150: 98.7314% ( 6) 00:15:04.763 9.387 - 9.434: 98.7395% ( 1) 00:15:04.763 9.434 - 9.481: 98.7555% ( 2) 00:15:04.763 9.481 - 
9.529: 98.7635% ( 1) 00:15:04.763 9.576 - 9.624: 98.7716% ( 1) 00:15:04.763 9.624 - 9.671: 98.7796% ( 1) 00:15:04.763 9.719 - 9.766: 98.7957% ( 2) 00:15:04.763 9.813 - 9.861: 98.8037% ( 1) 00:15:04.763 9.861 - 9.908: 98.8117% ( 1) 00:15:04.763 9.956 - 10.003: 98.8198% ( 1) 00:15:04.763 10.050 - 10.098: 98.8278% ( 1) 00:15:04.763 10.193 - 10.240: 98.8358% ( 1) 00:15:04.763 10.524 - 10.572: 98.8438% ( 1) 00:15:04.763 10.667 - 10.714: 98.8599% ( 2) 00:15:04.763 11.046 - 11.093: 98.8679% ( 1) 00:15:04.763 11.236 - 11.283: 98.8760% ( 1) 00:15:04.763 11.283 - 11.330: 98.8840% ( 1) 00:15:04.763 11.473 - 11.520: 98.8920% ( 1) 00:15:04.763 11.662 - 11.710: 98.9000% ( 1) 00:15:04.763 11.710 - 11.757: 98.9081% ( 1) 00:15:04.763 11.757 - 11.804: 98.9161% ( 1) 00:15:04.763 11.804 - 11.852: 98.9562% ( 5) 00:15:04.763 11.852 - 11.899: 98.9723% ( 2) 00:15:04.763 11.947 - 11.994: 98.9803% ( 1) 00:15:04.763 11.994 - 12.041: 98.9884% ( 1) 00:15:04.763 12.089 - 12.136: 98.9964% ( 1) 00:15:04.763 12.136 - 12.231: 99.0044% ( 1) 00:15:04.763 12.421 - 12.516: 99.0124% ( 1) 00:15:04.763 12.516 - 12.610: 99.0285% ( 2) 00:15:04.763 12.800 - 12.895: 99.0606% ( 4) 00:15:04.763 12.990 - 13.084: 99.0686% ( 1) 00:15:04.763 13.084 - 13.179: 99.0847% ( 2) 00:15:04.763 13.274 - 13.369: 99.0927% ( 1) 00:15:04.763 13.464 - 13.559: 99.1008% ( 1) 00:15:04.763 13.559 - 13.653: 99.1088% ( 1) 00:15:04.763 13.653 - 13.748: 99.1168% ( 1) 00:15:04.763 13.748 - 13.843: 99.1409% ( 3) 00:15:04.763 13.843 - 13.938: 99.1570% ( 2) 00:15:04.763 14.317 - 14.412: 99.1650% ( 1) 00:15:04.763 14.507 - 14.601: 99.1811% ( 2) 00:15:04.763 14.601 - 14.696: 99.1891% ( 1) 00:15:04.763 14.696 - 14.791: 99.1971% ( 1) 00:15:04.763 14.791 - 14.886: 99.2132% ( 2) 00:15:04.763 15.455 - 15.550: 99.2212% ( 1) 00:15:04.763 17.256 - 17.351: 99.2292% ( 1) 00:15:04.763 17.351 - 17.446: 99.2774% ( 6) 00:15:04.763 17.446 - 17.541: 99.3095% ( 4) 00:15:04.763 17.541 - 17.636: 99.3416% ( 4) 00:15:04.763 17.636 - 17.730: 99.3898% ( 6) 
00:15:04.763 17.730 - 17.825: 99.4219% ( 4) 00:15:04.763 17.825 - 17.920: 99.4621% ( 5) 00:15:04.763 17.920 - 18.015: 99.5102% ( 6) 00:15:04.763 18.015 - 18.110: 99.5825% ( 9) 00:15:04.763 18.110 - 18.204: 99.6628% ( 10) 00:15:04.763 18.204 - 18.299: 99.7270% ( 8) 00:15:04.763 18.299 - 18.394: 99.7350% ( 1) 00:15:04.763 18.394 - 18.489: 99.7431% ( 1) 00:15:04.763 18.489 - 18.584: 99.7832% ( 5) 00:15:04.763 18.584 - 18.679: 99.7993% ( 2) 00:15:04.763 18.679 - 18.773: 99.8153% ( 2) 00:15:04.763 18.773 - 18.868: 99.8394% ( 3) 00:15:04.763 18.868 - 18.963: 99.8715% ( 4) 00:15:04.763 19.058 - 19.153: 99.8796% ( 1) 00:15:04.763 19.153 - 19.247: 99.8956% ( 2) 00:15:04.763 19.532 - 19.627: 99.9037% ( 1) 00:15:04.763 19.816 - 19.911: 99.9117% ( 1) 00:15:04.763 22.566 - 22.661: 99.9197% ( 1) 00:15:04.763 25.410 - 25.600: 99.9277% ( 1) 00:15:04.763 26.169 - 26.359: 99.9358% ( 1) 00:15:04.763 27.307 - 27.496: 99.9438% ( 1) 00:15:04.763 3980.705 - 4004.978: 99.9839% ( 5) 00:15:04.763 4004.978 - 4029.250: 100.0000% ( 2) 00:15:04.763 00:15:04.763 Complete histogram 00:15:04.763 ================== 00:15:04.763 Range in us Cumulative Count 00:15:04.763 2.050 - 2.062: 0.0080% ( 1) 00:15:04.763 2.062 - 2.074: 9.2894% ( 1156) 00:15:04.763 2.074 - 2.086: 32.7178% ( 2918) 00:15:04.763 2.086 - 2.098: 34.9177% ( 274) 00:15:04.763 2.098 - 2.110: 50.1887% ( 1902) 00:15:04.763 2.110 - 2.121: 60.7547% ( 1316) 00:15:04.763 2.121 - 2.133: 62.9065% ( 268) 00:15:04.763 2.133 - 2.145: 70.6222% ( 961) 00:15:04.763 2.145 - 2.157: 76.3870% ( 718) 00:15:04.763 2.157 - 2.169: 77.6234% ( 154) 00:15:04.763 2.169 - 2.181: 83.4524% ( 726) 00:15:04.763 2.181 - 2.193: 86.6479% ( 398) 00:15:04.763 2.193 - 2.204: 87.3786% ( 91) 00:15:04.763 2.204 - 2.216: 89.1610% ( 222) 00:15:04.763 2.216 - 2.228: 91.7704% ( 325) 00:15:04.763 2.228 - 2.240: 92.9667% ( 149) 00:15:04.763 2.240 - 2.252: 93.9944% ( 128) 00:15:04.763 2.252 - 2.264: 94.5243% ( 66) 00:15:04.763 2.264 - 2.276: 94.6768% ( 19) 00:15:04.763 2.276 - 
2.287: 94.9819% ( 38) 00:15:04.763 2.287 - 2.299: 95.3352% ( 44) 00:15:04.763 2.299 - 2.311: 95.6002% ( 33) 00:15:04.763 2.311 - 2.323: 95.6804% ( 10) 00:15:04.763 2.323 - 2.335: 95.7045% ( 3) 00:15:04.763 2.335 - 2.347: 95.7367% ( 4) 00:15:04.763 2.347 - 2.359: 95.8651% ( 16) 00:15:04.763 2.359 - 2.370: 96.1782% ( 39) 00:15:04.763 2.370 - 2.382: 96.4914% ( 39) 00:15:04.763 2.382 - 2.394: 96.8045% ( 39) 00:15:04.763 2.394 - 2.406: 97.2381% ( 54) 00:15:04.763 2.406 - 2.418: 97.4227% ( 23) 00:15:04.763 2.418 - 2.430: 97.6234% ( 25) 00:15:04.763 2.430 - 2.441: 97.8081% ( 23) 00:15:04.763 2.441 - 2.453: 97.8964% ( 11) 00:15:04.763 2.453 - 2.465: 98.0570% ( 20) 00:15:04.763 2.465 - 2.477: 98.1694% ( 14) 00:15:04.763 2.477 - 2.489: 98.2738% ( 13) 00:15:04.763 2.489 - 2.501: 98.3621% ( 11) 00:15:04.763 2.501 - 2.513: 98.3782% ( 2) 00:15:04.763 2.513 - 2.524: 98.3942% ( 2) 00:15:04.763 2.524 - 2.536: 98.4022% ( 1) 00:15:04.763 2.536 - 2.548: 98.4263% ( 3) 00:15:04.763 2.548 - 2.560: 98.4504% ( 3) 00:15:04.763 2.572 - 2.584: 98.4585% ( 1) 00:15:04.763 2.584 - 2.596: 98.4745% ( 2) 00:15:04.763 2.607 - 2.619: 98.4825% ( 1) 00:15:04.763 2.631 - 2.643: 98.4906% ( 1) 00:15:04.763 2.643 - 2.655: 98.4986% ( 1) 00:15:04.763 2.750 - 2.761: 98.5066% ( 1) 00:15:04.763 2.797 - 2.809: 98.5147% ( 1) 00:15:04.763 2.868 - 2.880: 98.5227% ( 1) 00:15:04.763 3.390 - 3.413: 98.5307% ( 1) 00:15:04.763 3.413 - 3.437: 98.5387% ( 1) 00:15:04.763 3.437 - 3.461: 98.5468% ( 1) 00:15:04.763 3.484 - 3.508: 98.5628% ( 2) 00:15:04.763 3.579 - 3.603: 98.5709% ( 1) 00:15:04.763 3.603 - 3.627: 98.5789% ( 1) 00:15:04.763 3.650 - 3.674: 98.5869% ( 1) 00:15:04.763 3.674 - 3.698: 98.5949% ( 1) 00:15:04.763 3.721 - 3.745: 98.6110% ( 2) 00:15:04.763 3.745 - 3.769: 98.6190% ( 1) 00:15:04.763 3.769 - 3.793: 98.6351% ( 2) 00:15:04.763 3.793 - 3.816: 98.6431% ( 1) 00:15:04.763 3.887 - 3.911: 98.6511% ( 1) 00:15:04.763 5.144 - 5.167: 98.6592% ( 1) 00:15:04.763 5.286 - 5.310: 98.6672% ( 1) 00:15:04.763 5.570 - 5.594: 
98.6752% ( 1) 00:15:04.763 5.736 - 5.760: 98.6833% ( 1) 00:15:04.763 [2024-12-06 19:13:49.758526] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:04.763 5.784 - 5.807: 98.6993% ( 2) 00:15:04.763 5.831 - 5.855: 98.7073% ( 1) 00:15:04.763 5.950 - 5.973: 98.7154% ( 1) 00:15:04.763 5.997 - 6.021: 98.7234% ( 1) 00:15:04.763 6.116 - 6.163: 98.7314% ( 1) 00:15:04.763 6.163 - 6.210: 98.7395% ( 1) 00:15:04.763 6.258 - 6.305: 98.7475% ( 1) 00:15:04.763 6.305 - 6.353: 98.7555% ( 1) 00:15:04.763 6.353 - 6.400: 98.7635% ( 1) 00:15:04.763 6.447 - 6.495: 98.7876% ( 3) 00:15:04.763 6.542 - 6.590: 98.7957% ( 1) 00:15:04.763 6.779 - 6.827: 98.8037% ( 1) 00:15:04.763 7.111 - 7.159: 98.8117% ( 1) 00:15:04.763 7.822 - 7.870: 98.8198% ( 1) 00:15:04.764 9.197 - 9.244: 98.8278% ( 1) 00:15:04.764 9.861 - 9.908: 98.8358% ( 1) 00:15:04.764 15.360 - 15.455: 98.8438% ( 1) 00:15:04.764 15.550 - 15.644: 98.8519% ( 1) 00:15:04.764 15.644 - 15.739: 98.8679% ( 2) 00:15:04.764 15.739 - 15.834: 98.8920% ( 3) 00:15:04.764 15.834 - 15.929: 98.9081% ( 2) 00:15:04.764 15.929 - 16.024: 98.9482% ( 5) 00:15:04.764 16.024 - 16.119: 98.9803% ( 4) 00:15:04.764 16.119 - 16.213: 98.9884% ( 1) 00:15:04.764 16.213 - 16.308: 99.0044% ( 2) 00:15:04.764 16.308 - 16.403: 99.0446% ( 5) 00:15:04.764 16.403 - 16.498: 99.0767% ( 4) 00:15:04.764 16.498 - 16.593: 99.1008% ( 3) 00:15:04.764 16.593 - 16.687: 99.1168% ( 2) 00:15:04.764 16.687 - 16.782: 99.1248% ( 1) 00:15:04.764 16.877 - 16.972: 99.1409% ( 2) 00:15:04.764 16.972 - 17.067: 99.1650% ( 3) 00:15:04.764 17.067 - 17.161: 99.1811% ( 2) 00:15:04.764 17.161 - 17.256: 99.1891% ( 1) 00:15:04.764 17.256 - 17.351: 99.1971% ( 1) 00:15:04.764 17.351 - 17.446: 99.2051% ( 1) 00:15:04.764 17.541 - 17.636: 99.2212% ( 2) 00:15:04.764 17.636 - 17.730: 99.2292% ( 1) 00:15:04.764 17.730 - 17.825: 99.2373% ( 1) 00:15:04.764 17.825 - 17.920: 99.2613% ( 3) 00:15:04.764 17.920 - 18.015: 99.2774% ( 2) 00:15:04.764 18.489 - 
18.584: 99.2854% ( 1) 00:15:04.764 18.584 - 18.679: 99.3015% ( 2) 00:15:04.764 18.868 - 18.963: 99.3095% ( 1) 00:15:04.764 3980.705 - 4004.978: 99.7270% ( 52) 00:15:04.764 4004.978 - 4029.250: 99.9920% ( 33) 00:15:04.764 6990.507 - 7039.052: 100.0000% ( 1) 00:15:04.764 00:15:04.764 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:05.021 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:05.021 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:05.021 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:05.021 19:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:05.279 [ 00:15:05.279 { 00:15:05.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.279 "subtype": "Discovery", 00:15:05.279 "listen_addresses": [], 00:15:05.279 "allow_any_host": true, 00:15:05.279 "hosts": [] 00:15:05.279 }, 00:15:05.279 { 00:15:05.279 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:05.279 "subtype": "NVMe", 00:15:05.279 "listen_addresses": [ 00:15:05.279 { 00:15:05.279 "trtype": "VFIOUSER", 00:15:05.279 "adrfam": "IPv4", 00:15:05.279 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:05.279 "trsvcid": "0" 00:15:05.279 } 00:15:05.279 ], 00:15:05.279 "allow_any_host": true, 00:15:05.279 "hosts": [], 00:15:05.279 "serial_number": "SPDK1", 00:15:05.279 "model_number": "SPDK bdev Controller", 00:15:05.279 "max_namespaces": 32, 00:15:05.279 "min_cntlid": 1, 00:15:05.279 "max_cntlid": 65519, 00:15:05.279 "namespaces": [ 00:15:05.279 { 00:15:05.279 "nsid": 1, 00:15:05.279 "bdev_name": "Malloc1", 
00:15:05.279 "name": "Malloc1", 00:15:05.279 "nguid": "73B553ED144F4BFA9D253C5789CAF276", 00:15:05.279 "uuid": "73b553ed-144f-4bfa-9d25-3c5789caf276" 00:15:05.279 }, 00:15:05.279 { 00:15:05.279 "nsid": 2, 00:15:05.279 "bdev_name": "Malloc3", 00:15:05.279 "name": "Malloc3", 00:15:05.279 "nguid": "7FFBBBB8550949299714C19680687229", 00:15:05.279 "uuid": "7ffbbbb8-5509-4929-9714-c19680687229" 00:15:05.279 } 00:15:05.279 ] 00:15:05.279 }, 00:15:05.279 { 00:15:05.279 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:05.279 "subtype": "NVMe", 00:15:05.279 "listen_addresses": [ 00:15:05.279 { 00:15:05.279 "trtype": "VFIOUSER", 00:15:05.279 "adrfam": "IPv4", 00:15:05.279 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:05.279 "trsvcid": "0" 00:15:05.279 } 00:15:05.279 ], 00:15:05.279 "allow_any_host": true, 00:15:05.279 "hosts": [], 00:15:05.279 "serial_number": "SPDK2", 00:15:05.279 "model_number": "SPDK bdev Controller", 00:15:05.279 "max_namespaces": 32, 00:15:05.279 "min_cntlid": 1, 00:15:05.279 "max_cntlid": 65519, 00:15:05.279 "namespaces": [ 00:15:05.279 { 00:15:05.279 "nsid": 1, 00:15:05.279 "bdev_name": "Malloc2", 00:15:05.279 "name": "Malloc2", 00:15:05.279 "nguid": "7AFFA17D28DE448EBA05C2D053D9382C", 00:15:05.279 "uuid": "7affa17d-28de-448e-ba05-c2d053d9382c" 00:15:05.279 } 00:15:05.279 ] 00:15:05.279 } 00:15:05.279 ] 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=194205 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # 
waitforfile /tmp/aer_touch_file 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:05.279 [2024-12-06 19:13:50.279341] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:05.279 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:05.862 Malloc4 00:15:05.862 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:05.862 [2024-12-06 19:13:50.886960] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.863 19:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:06.120 Asynchronous Event Request test 00:15:06.120 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.120 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.120 Registering asynchronous event callbacks... 00:15:06.120 Starting namespace attribute notice tests for all controllers... 00:15:06.120 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:06.120 aer_cb - Changed Namespace 00:15:06.120 Cleaning up... 
00:15:06.120 [ 00:15:06.120 { 00:15:06.120 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:06.120 "subtype": "Discovery", 00:15:06.120 "listen_addresses": [], 00:15:06.120 "allow_any_host": true, 00:15:06.120 "hosts": [] 00:15:06.120 }, 00:15:06.120 { 00:15:06.120 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:06.120 "subtype": "NVMe", 00:15:06.120 "listen_addresses": [ 00:15:06.120 { 00:15:06.120 "trtype": "VFIOUSER", 00:15:06.120 "adrfam": "IPv4", 00:15:06.120 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:06.120 "trsvcid": "0" 00:15:06.120 } 00:15:06.120 ], 00:15:06.120 "allow_any_host": true, 00:15:06.120 "hosts": [], 00:15:06.120 "serial_number": "SPDK1", 00:15:06.120 "model_number": "SPDK bdev Controller", 00:15:06.120 "max_namespaces": 32, 00:15:06.120 "min_cntlid": 1, 00:15:06.120 "max_cntlid": 65519, 00:15:06.120 "namespaces": [ 00:15:06.120 { 00:15:06.120 "nsid": 1, 00:15:06.120 "bdev_name": "Malloc1", 00:15:06.120 "name": "Malloc1", 00:15:06.120 "nguid": "73B553ED144F4BFA9D253C5789CAF276", 00:15:06.120 "uuid": "73b553ed-144f-4bfa-9d25-3c5789caf276" 00:15:06.120 }, 00:15:06.120 { 00:15:06.120 "nsid": 2, 00:15:06.120 "bdev_name": "Malloc3", 00:15:06.120 "name": "Malloc3", 00:15:06.120 "nguid": "7FFBBBB8550949299714C19680687229", 00:15:06.120 "uuid": "7ffbbbb8-5509-4929-9714-c19680687229" 00:15:06.120 } 00:15:06.120 ] 00:15:06.120 }, 00:15:06.120 { 00:15:06.120 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:06.120 "subtype": "NVMe", 00:15:06.120 "listen_addresses": [ 00:15:06.120 { 00:15:06.120 "trtype": "VFIOUSER", 00:15:06.120 "adrfam": "IPv4", 00:15:06.121 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:06.121 "trsvcid": "0" 00:15:06.121 } 00:15:06.121 ], 00:15:06.121 "allow_any_host": true, 00:15:06.121 "hosts": [], 00:15:06.121 "serial_number": "SPDK2", 00:15:06.121 "model_number": "SPDK bdev Controller", 00:15:06.121 "max_namespaces": 32, 00:15:06.121 "min_cntlid": 1, 00:15:06.121 "max_cntlid": 65519, 00:15:06.121 "namespaces": [ 
00:15:06.121 { 00:15:06.121 "nsid": 1, 00:15:06.121 "bdev_name": "Malloc2", 00:15:06.121 "name": "Malloc2", 00:15:06.121 "nguid": "7AFFA17D28DE448EBA05C2D053D9382C", 00:15:06.121 "uuid": "7affa17d-28de-448e-ba05-c2d053d9382c" 00:15:06.121 }, 00:15:06.121 { 00:15:06.121 "nsid": 2, 00:15:06.121 "bdev_name": "Malloc4", 00:15:06.121 "name": "Malloc4", 00:15:06.121 "nguid": "647BCC104D794B768FA283FD2C82CDA7", 00:15:06.121 "uuid": "647bcc10-4d79-4b76-8fa2-83fd2c82cda7" 00:15:06.121 } 00:15:06.121 ] 00:15:06.121 } 00:15:06.121 ] 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 194205 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 188585 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 188585 ']' 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 188585 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 188585 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 188585' 00:15:06.378 killing process with pid 188585 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 188585 00:15:06.378 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 188585 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=194349 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 194349' 00:15:06.639 Process pid: 194349 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 194349 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 194349 ']' 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.639 19:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.639 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:06.639 [2024-12-06 19:13:51.554486] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:06.639 [2024-12-06 19:13:51.555528] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:15:06.639 [2024-12-06 19:13:51.555596] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.639 [2024-12-06 19:13:51.624286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.639 [2024-12-06 19:13:51.684018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.639 [2024-12-06 19:13:51.684086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.639 [2024-12-06 19:13:51.684100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.639 [2024-12-06 19:13:51.684111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.639 [2024-12-06 19:13:51.684121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.639 [2024-12-06 19:13:51.685790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.639 [2024-12-06 19:13:51.685824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.900 [2024-12-06 19:13:51.685882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.900 [2024-12-06 19:13:51.685885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.900 [2024-12-06 19:13:51.785438] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:06.900 [2024-12-06 19:13:51.785627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:06.900 [2024-12-06 19:13:51.785940] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:06.900 [2024-12-06 19:13:51.786529] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:06.900 [2024-12-06 19:13:51.786751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:06.900 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.900 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:06.900 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:07.839 19:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:08.099 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:08.099 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:08.357 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.357 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:08.357 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:08.615 Malloc1 00:15:08.615 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:08.872 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:09.130 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:09.387 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.387 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:09.387 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:09.645 Malloc2 00:15:09.645 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:09.902 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:10.161 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 194349 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 194349 ']' 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 194349 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:10.420 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.420 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 194349 00:15:10.678 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.678 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.678 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 194349' 00:15:10.678 killing process with pid 194349 00:15:10.678 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 194349 00:15:10.678 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 194349 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:10.938 00:15:10.938 real 0m53.841s 00:15:10.938 user 3m27.890s 00:15:10.938 sys 0m3.975s 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:10.938 ************************************ 00:15:10.938 END TEST nvmf_vfio_user 00:15:10.938 ************************************ 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.938 ************************************ 00:15:10.938 START TEST nvmf_vfio_user_nvme_compliance 00:15:10.938 ************************************ 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:10.938 * Looking for test storage... 00:15:10.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:10.938 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:10.938 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.939 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:10.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.939 --rc genhtml_branch_coverage=1 00:15:10.939 --rc genhtml_function_coverage=1 00:15:10.939 --rc genhtml_legend=1 00:15:10.939 --rc geninfo_all_blocks=1 00:15:10.939 --rc geninfo_unexecuted_blocks=1 00:15:10.939 00:15:10.939 ' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:10.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.939 --rc genhtml_branch_coverage=1 00:15:10.939 --rc genhtml_function_coverage=1 00:15:10.939 --rc genhtml_legend=1 00:15:10.939 --rc geninfo_all_blocks=1 00:15:10.939 --rc geninfo_unexecuted_blocks=1 00:15:10.939 00:15:10.939 ' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:10.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.939 --rc genhtml_branch_coverage=1 00:15:10.939 --rc genhtml_function_coverage=1 00:15:10.939 --rc 
genhtml_legend=1 00:15:10.939 --rc geninfo_all_blocks=1 00:15:10.939 --rc geninfo_unexecuted_blocks=1 00:15:10.939 00:15:10.939 ' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:10.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.939 --rc genhtml_branch_coverage=1 00:15:10.939 --rc genhtml_function_coverage=1 00:15:10.939 --rc genhtml_legend=1 00:15:10.939 --rc geninfo_all_blocks=1 00:15:10.939 --rc geninfo_unexecuted_blocks=1 00:15:10.939 00:15:10.939 ' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.939 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:10.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:10.939 19:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=194965 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:10.939 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 194965' 00:15:10.939 Process pid: 194965 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 194965 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 194965 ']' 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.940 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:11.199 [2024-12-06 19:13:56.016384] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:15:11.199 [2024-12-06 19:13:56.016474] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.199 [2024-12-06 19:13:56.083433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.199 [2024-12-06 19:13:56.138175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.199 [2024-12-06 19:13:56.138236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.199 [2024-12-06 19:13:56.138258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.199 [2024-12-06 19:13:56.138268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.199 [2024-12-06 19:13:56.138278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.199 [2024-12-06 19:13:56.139648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.199 [2024-12-06 19:13:56.139776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.199 [2024-12-06 19:13:56.139780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.458 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.458 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:11.458 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.399 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.400 19:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.400 malloc0 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:12.400 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:12.660 00:15:12.660 00:15:12.660 CUnit - A unit testing framework for C - Version 2.1-3 00:15:12.660 http://cunit.sourceforge.net/ 00:15:12.660 00:15:12.660 00:15:12.660 Suite: nvme_compliance 00:15:12.660 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 19:13:57.509278] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.660 [2024-12-06 19:13:57.510779] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:12.660 [2024-12-06 19:13:57.510806] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:12.660 [2024-12-06 19:13:57.510819] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:12.660 [2024-12-06 19:13:57.512295] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.660 passed 00:15:12.660 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 19:13:57.597908] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.660 [2024-12-06 19:13:57.600933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.660 passed 00:15:12.660 Test: admin_identify_ns ...[2024-12-06 19:13:57.686216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.919 [2024-12-06 19:13:57.745754] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:12.919 [2024-12-06 19:13:57.753739] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:12.919 [2024-12-06 19:13:57.777883] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:12.919 passed 00:15:12.919 Test: admin_get_features_mandatory_features ...[2024-12-06 19:13:57.857349] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.919 [2024-12-06 19:13:57.860368] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.919 passed 00:15:12.919 Test: admin_get_features_optional_features ...[2024-12-06 19:13:57.945956] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.919 [2024-12-06 19:13:57.948978] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.179 passed 00:15:13.179 Test: admin_set_features_number_of_queues ...[2024-12-06 19:13:58.032171] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.179 [2024-12-06 19:13:58.136832] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.179 passed 00:15:13.179 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 19:13:58.220317] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.179 [2024-12-06 19:13:58.223340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.461 passed 00:15:13.461 Test: admin_get_log_page_with_lpo ...[2024-12-06 19:13:58.304390] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.461 [2024-12-06 19:13:58.371752] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:13.461 [2024-12-06 19:13:58.384818] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.461 passed 00:15:13.461 Test: fabric_property_get ...[2024-12-06 19:13:58.470091] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.461 [2024-12-06 19:13:58.471364] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:13.461 [2024-12-06 19:13:58.473115] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.461 passed 00:15:13.720 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 19:13:58.557666] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.720 [2024-12-06 19:13:58.558966] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:13.720 [2024-12-06 19:13:58.560687] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.720 passed 00:15:13.720 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 19:13:58.643241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.720 [2024-12-06 19:13:58.726730] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.720 [2024-12-06 19:13:58.742736] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.720 [2024-12-06 19:13:58.747853] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.980 passed 00:15:13.980 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 19:13:58.831483] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.980 [2024-12-06 19:13:58.832806] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:13.981 [2024-12-06 19:13:58.834499] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:13.981 passed 00:15:13.981 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 19:13:58.914660] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:13.981 [2024-12-06 19:13:58.991732] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:13.981 [2024-12-06 
19:13:59.015746] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:13.981 [2024-12-06 19:13:59.020860] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.241 passed 00:15:14.241 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 19:13:59.104542] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.241 [2024-12-06 19:13:59.105860] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:14.241 [2024-12-06 19:13:59.105899] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:14.241 [2024-12-06 19:13:59.107565] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.241 passed 00:15:14.241 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 19:13:59.188792] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.241 [2024-12-06 19:13:59.281734] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:14.241 [2024-12-06 19:13:59.289734] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:14.502 [2024-12-06 19:13:59.297748] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:14.502 [2024-12-06 19:13:59.305748] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:14.502 [2024-12-06 19:13:59.334848] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.502 passed 00:15:14.502 Test: admin_create_io_sq_verify_pc ...[2024-12-06 19:13:59.417084] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:14.502 [2024-12-06 19:13:59.433742] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:14.502 [2024-12-06 19:13:59.451447] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:14.502 passed 00:15:14.502 Test: admin_create_io_qp_max_qps ...[2024-12-06 19:13:59.537021] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.881 [2024-12-06 19:14:00.652741] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:16.141 [2024-12-06 19:14:01.037796] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.141 passed 00:15:16.141 Test: admin_create_io_sq_shared_cq ...[2024-12-06 19:14:01.121131] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.400 [2024-12-06 19:14:01.252732] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:16.400 [2024-12-06 19:14:01.292835] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.400 passed 00:15:16.400 00:15:16.400 Run Summary: Type Total Ran Passed Failed Inactive 00:15:16.400 suites 1 1 n/a 0 0 00:15:16.400 tests 18 18 18 0 0 00:15:16.400 asserts 360 360 360 0 n/a 00:15:16.400 00:15:16.400 Elapsed time = 1.569 seconds 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 194965 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 194965 ']' 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 194965 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 194965 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 194965' 00:15:16.400 killing process with pid 194965 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 194965 00:15:16.400 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 194965 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:16.658 00:15:16.658 real 0m5.810s 00:15:16.658 user 0m16.293s 00:15:16.658 sys 0m0.551s 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.658 ************************************ 00:15:16.658 END TEST nvmf_vfio_user_nvme_compliance 00:15:16.658 ************************************ 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.658 ************************************ 00:15:16.658 START TEST nvmf_vfio_user_fuzz 00:15:16.658 ************************************ 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:16.658 * Looking for test storage... 00:15:16.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.658 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.917 19:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.917 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:16.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.918 --rc genhtml_branch_coverage=1 00:15:16.918 --rc genhtml_function_coverage=1 00:15:16.918 --rc genhtml_legend=1 00:15:16.918 --rc geninfo_all_blocks=1 00:15:16.918 --rc geninfo_unexecuted_blocks=1 00:15:16.918 00:15:16.918 ' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:16.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.918 --rc genhtml_branch_coverage=1 00:15:16.918 --rc genhtml_function_coverage=1 00:15:16.918 --rc genhtml_legend=1 00:15:16.918 --rc geninfo_all_blocks=1 00:15:16.918 --rc geninfo_unexecuted_blocks=1 00:15:16.918 00:15:16.918 ' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:16.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.918 --rc genhtml_branch_coverage=1 00:15:16.918 --rc genhtml_function_coverage=1 00:15:16.918 --rc genhtml_legend=1 00:15:16.918 --rc geninfo_all_blocks=1 00:15:16.918 --rc geninfo_unexecuted_blocks=1 00:15:16.918 00:15:16.918 ' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:16.918 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:16.918 --rc genhtml_branch_coverage=1 00:15:16.918 --rc genhtml_function_coverage=1 00:15:16.918 --rc genhtml_legend=1 00:15:16.918 --rc geninfo_all_blocks=1 00:15:16.918 --rc geninfo_unexecuted_blocks=1 00:15:16.918 00:15:16.918 ' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.918 19:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=195690 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 195690' 00:15:16.918 Process pid: 195690 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 195690 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 195690 ']' 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.918 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.918 19:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.919 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.919 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:17.179 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.179 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:17.179 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:18.116 malloc0 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:18.116 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.383 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:18.383 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.383 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:18.383 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:50.453 Fuzzing completed. Shutting down the fuzz application 00:15:50.453 00:15:50.453 Dumping successful admin opcodes: 00:15:50.453 9, 10, 00:15:50.453 Dumping successful io opcodes: 00:15:50.453 0, 00:15:50.453 NS: 0x20000081ef00 I/O qp, Total commands completed: 659639, total successful commands: 2569, random_seed: 2313034240 00:15:50.453 NS: 0x20000081ef00 admin qp, Total commands completed: 132337, total successful commands: 29, random_seed: 3249788352 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 195690 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 195690 ']' 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 195690 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 195690 00:15:50.453 19:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 195690' 00:15:50.453 killing process with pid 195690 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 195690 00:15:50.453 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 195690 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:50.454 00:15:50.454 real 0m32.251s 00:15:50.454 user 0m30.005s 00:15:50.454 sys 0m29.565s 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:50.454 ************************************ 00:15:50.454 END TEST nvmf_vfio_user_fuzz 00:15:50.454 ************************************ 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.454 ************************************ 00:15:50.454 START TEST nvmf_auth_target 00:15:50.454 ************************************ 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:50.454 * Looking for test storage... 00:15:50.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:50.454 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.454 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.454 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:50.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.454 --rc genhtml_branch_coverage=1 00:15:50.454 --rc genhtml_function_coverage=1 00:15:50.454 --rc genhtml_legend=1 00:15:50.454 --rc geninfo_all_blocks=1 00:15:50.454 --rc geninfo_unexecuted_blocks=1 00:15:50.454 00:15:50.454 ' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:50.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.454 --rc genhtml_branch_coverage=1 00:15:50.454 --rc genhtml_function_coverage=1 00:15:50.454 --rc genhtml_legend=1 00:15:50.454 --rc geninfo_all_blocks=1 00:15:50.454 --rc geninfo_unexecuted_blocks=1 00:15:50.454 00:15:50.454 ' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:50.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.454 --rc genhtml_branch_coverage=1 00:15:50.454 --rc genhtml_function_coverage=1 00:15:50.454 --rc genhtml_legend=1 00:15:50.454 --rc geninfo_all_blocks=1 00:15:50.454 --rc geninfo_unexecuted_blocks=1 00:15:50.454 00:15:50.454 ' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:50.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.454 --rc genhtml_branch_coverage=1 00:15:50.454 --rc genhtml_function_coverage=1 00:15:50.454 --rc genhtml_legend=1 00:15:50.454 
--rc geninfo_all_blocks=1 00:15:50.454 --rc geninfo_unexecuted_blocks=1 00:15:50.454 00:15:50.454 ' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.454 
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.454 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:50.455 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:50.455 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:50.455 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:51.394 19:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:51.394 19:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:51.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:51.394 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.394 
19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:51.394 Found net devices under 0000:84:00.0: cvl_0_0 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:51.394 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:51.395 
19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:51.395 Found net devices under 0000:84:00.1: cvl_0_1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:51.395 19:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:51.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:15:51.395 00:15:51.395 --- 10.0.0.2 ping statistics --- 00:15:51.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.395 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:15:51.395 00:15:51.395 --- 10.0.0.1 ping statistics --- 00:15:51.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.395 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=201158 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 201158 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 201158 ']' 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.395 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=201189 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0afbd8d58110f6e21923a70119a9a938bfda263253634f0a 00:15:51.654 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4aD 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0afbd8d58110f6e21923a70119a9a938bfda263253634f0a 0 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0afbd8d58110f6e21923a70119a9a938bfda263253634f0a 0 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0afbd8d58110f6e21923a70119a9a938bfda263253634f0a 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4aD 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4aD 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.4aD 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9804dc4ca1701256b42fdc4d037822f1e41a6c4649a2be1189606660ce087d43 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dnP 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9804dc4ca1701256b42fdc4d037822f1e41a6c4649a2be1189606660ce087d43 3 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9804dc4ca1701256b42fdc4d037822f1e41a6c4649a2be1189606660ce087d43 3 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9804dc4ca1701256b42fdc4d037822f1e41a6c4649a2be1189606660ce087d43 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dnP 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dnP 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dnP 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.655 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e97ade70818b084aa0004276168cfd9 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Avv 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e97ade70818b084aa0004276168cfd9 1 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
5e97ade70818b084aa0004276168cfd9 1 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e97ade70818b084aa0004276168cfd9 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Avv 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Avv 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Avv 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6be9d026a2b078e1309865ef281f0c972dbbebe0778ac13 00:15:51.914 19:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1fI 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6be9d026a2b078e1309865ef281f0c972dbbebe0778ac13 2 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6be9d026a2b078e1309865ef281f0c972dbbebe0778ac13 2 00:15:51.914 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6be9d026a2b078e1309865ef281f0c972dbbebe0778ac13 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1fI 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1fI 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1fI 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d28bde051c924d1e3ebeb5195c7000752fe3fc2dcaa6ca97 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kLM 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d28bde051c924d1e3ebeb5195c7000752fe3fc2dcaa6ca97 2 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d28bde051c924d1e3ebeb5195c7000752fe3fc2dcaa6ca97 2 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d28bde051c924d1e3ebeb5195c7000752fe3fc2dcaa6ca97 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kLM 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kLM 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.kLM 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb10beb767b12973e45d38346ce59968 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xh9 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb10beb767b12973e45d38346ce59968 1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb10beb767b12973e45d38346ce59968 1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb10beb767b12973e45d38346ce59968 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xh9 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xh9 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.xh9 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ba202a5b12f9b3a63980719279149426d89fd7cac4de7aa89d238a4828efa82f 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kwy 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ba202a5b12f9b3a63980719279149426d89fd7cac4de7aa89d238a4828efa82f 3 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 ba202a5b12f9b3a63980719279149426d89fd7cac4de7aa89d238a4828efa82f 3 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ba202a5b12f9b3a63980719279149426d89fd7cac4de7aa89d238a4828efa82f 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kwy 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kwy 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kwy 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 201158 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 201158 ']' 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.915 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 201189 /var/tmp/host.sock 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 201189 ']' 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:52.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4aD 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4aD 00:15:52.486 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4aD 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.dnP ]] 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dnP 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dnP 00:15:53.053 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dnP 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Avv 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Avv 00:15:53.053 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Avv 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.1fI ]] 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1fI 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1fI 00:15:53.311 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1fI 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kLM 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kLM 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kLM 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.xh9 ]] 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xh9 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xh9 00:15:53.876 19:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xh9 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kwy 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kwy 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kwy 00:15:54.503 19:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.503 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.760 19:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.760 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.761 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.761 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.761 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.018 00:15:55.277 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.277 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.277 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.534 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.534 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.534 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.534 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.535 { 00:15:55.535 "cntlid": 1, 00:15:55.535 "qid": 0, 00:15:55.535 "state": "enabled", 00:15:55.535 "thread": "nvmf_tgt_poll_group_000", 00:15:55.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:15:55.535 "listen_address": { 00:15:55.535 "trtype": "TCP", 00:15:55.535 "adrfam": "IPv4", 00:15:55.535 "traddr": "10.0.0.2", 00:15:55.535 "trsvcid": "4420" 00:15:55.535 }, 00:15:55.535 "peer_address": { 00:15:55.535 "trtype": "TCP", 00:15:55.535 "adrfam": "IPv4", 00:15:55.535 "traddr": "10.0.0.1", 00:15:55.535 "trsvcid": "58742" 00:15:55.535 }, 00:15:55.535 "auth": { 00:15:55.535 "state": "completed", 00:15:55.535 "digest": "sha256", 00:15:55.535 "dhgroup": "null" 00:15:55.535 } 00:15:55.535 } 00:15:55.535 ]' 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.535 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.792 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:15:55.792 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.056 00:16:01.056 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.057 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.057 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.316 { 00:16:01.316 "cntlid": 3, 00:16:01.316 "qid": 0, 00:16:01.316 "state": "enabled", 00:16:01.316 "thread": "nvmf_tgt_poll_group_000", 00:16:01.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:01.316 "listen_address": { 00:16:01.316 "trtype": "TCP", 00:16:01.316 "adrfam": "IPv4", 00:16:01.316 
"traddr": "10.0.0.2", 00:16:01.316 "trsvcid": "4420" 00:16:01.316 }, 00:16:01.316 "peer_address": { 00:16:01.316 "trtype": "TCP", 00:16:01.316 "adrfam": "IPv4", 00:16:01.316 "traddr": "10.0.0.1", 00:16:01.316 "trsvcid": "58774" 00:16:01.316 }, 00:16:01.316 "auth": { 00:16:01.316 "state": "completed", 00:16:01.316 "digest": "sha256", 00:16:01.316 "dhgroup": "null" 00:16:01.316 } 00:16:01.316 } 00:16:01.316 ]' 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.316 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.575 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:01.575 19:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.514 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.771 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.030 00:16:03.290 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.290 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.290 
19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.548 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.548 { 00:16:03.548 "cntlid": 5, 00:16:03.548 "qid": 0, 00:16:03.548 "state": "enabled", 00:16:03.548 "thread": "nvmf_tgt_poll_group_000", 00:16:03.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:03.548 "listen_address": { 00:16:03.548 "trtype": "TCP", 00:16:03.548 "adrfam": "IPv4", 00:16:03.548 "traddr": "10.0.0.2", 00:16:03.548 "trsvcid": "4420" 00:16:03.548 }, 00:16:03.548 "peer_address": { 00:16:03.548 "trtype": "TCP", 00:16:03.548 "adrfam": "IPv4", 00:16:03.548 "traddr": "10.0.0.1", 00:16:03.548 "trsvcid": "58796" 00:16:03.548 }, 00:16:03.548 "auth": { 00:16:03.548 "state": "completed", 00:16:03.548 "digest": "sha256", 00:16:03.548 "dhgroup": "null" 00:16:03.548 } 00:16:03.549 } 00:16:03.549 ]' 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.549 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.807 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:03.807 19:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.742 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.001 19:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.259 00:16:05.259 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.259 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.259 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.517 
19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.517 { 00:16:05.517 "cntlid": 7, 00:16:05.517 "qid": 0, 00:16:05.517 "state": "enabled", 00:16:05.517 "thread": "nvmf_tgt_poll_group_000", 00:16:05.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:05.517 "listen_address": { 00:16:05.517 "trtype": "TCP", 00:16:05.517 "adrfam": "IPv4", 00:16:05.517 "traddr": "10.0.0.2", 00:16:05.517 "trsvcid": "4420" 00:16:05.517 }, 00:16:05.517 "peer_address": { 00:16:05.517 "trtype": "TCP", 00:16:05.517 "adrfam": "IPv4", 00:16:05.517 "traddr": "10.0.0.1", 00:16:05.517 "trsvcid": "52532" 00:16:05.517 }, 00:16:05.517 "auth": { 00:16:05.517 "state": "completed", 00:16:05.517 "digest": "sha256", 00:16:05.517 "dhgroup": "null" 00:16:05.517 } 00:16:05.517 } 00:16:05.517 ]' 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.517 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.775 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:05.775 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.775 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.775 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.775 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.033 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:06.033 19:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:06.967 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.226 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.485 00:16:07.485 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.485 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.485 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.761 { 00:16:07.761 "cntlid": 9, 00:16:07.761 "qid": 0, 00:16:07.761 "state": "enabled", 00:16:07.761 "thread": "nvmf_tgt_poll_group_000", 00:16:07.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:07.761 "listen_address": { 00:16:07.761 "trtype": "TCP", 00:16:07.761 "adrfam": "IPv4", 00:16:07.761 "traddr": "10.0.0.2", 00:16:07.761 "trsvcid": "4420" 00:16:07.761 }, 00:16:07.761 "peer_address": { 00:16:07.761 "trtype": "TCP", 00:16:07.761 "adrfam": "IPv4", 00:16:07.761 "traddr": "10.0.0.1", 00:16:07.761 "trsvcid": "52560" 00:16:07.761 
}, 00:16:07.761 "auth": { 00:16:07.761 "state": "completed", 00:16:07.761 "digest": "sha256", 00:16:07.761 "dhgroup": "ffdhe2048" 00:16:07.761 } 00:16:07.761 } 00:16:07.761 ]' 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.761 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.018 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.018 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.018 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.277 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:08.277 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.215 19:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.215 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.835 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.835 19:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.835 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.093 { 00:16:10.093 "cntlid": 11, 00:16:10.093 "qid": 0, 00:16:10.093 "state": "enabled", 00:16:10.093 "thread": "nvmf_tgt_poll_group_000", 00:16:10.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:10.093 "listen_address": { 00:16:10.093 "trtype": "TCP", 00:16:10.093 "adrfam": "IPv4", 00:16:10.093 "traddr": "10.0.0.2", 00:16:10.093 "trsvcid": "4420" 00:16:10.093 }, 00:16:10.093 "peer_address": { 00:16:10.093 "trtype": "TCP", 00:16:10.093 "adrfam": "IPv4", 00:16:10.093 "traddr": "10.0.0.1", 00:16:10.093 "trsvcid": "52582" 00:16:10.093 }, 00:16:10.093 "auth": { 00:16:10.093 "state": "completed", 00:16:10.093 "digest": "sha256", 00:16:10.093 "dhgroup": "ffdhe2048" 00:16:10.093 } 00:16:10.093 } 00:16:10.093 ]' 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.093 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.094 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.350 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:10.350 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:11.281 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.281 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.281 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.281 19:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.281 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.282 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.282 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.282 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.540 19:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.540 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.797 00:16:11.797 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.797 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.797 19:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.054 { 00:16:12.054 "cntlid": 13, 00:16:12.054 "qid": 0, 00:16:12.054 "state": "enabled", 00:16:12.054 "thread": "nvmf_tgt_poll_group_000", 00:16:12.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:12.054 "listen_address": { 00:16:12.054 "trtype": "TCP", 00:16:12.054 "adrfam": "IPv4", 00:16:12.054 "traddr": "10.0.0.2", 00:16:12.054 "trsvcid": "4420" 00:16:12.054 }, 00:16:12.054 "peer_address": { 00:16:12.054 "trtype": "TCP", 00:16:12.054 "adrfam": "IPv4", 00:16:12.054 "traddr": "10.0.0.1", 00:16:12.054 "trsvcid": "52602" 00:16:12.054 }, 00:16:12.054 "auth": { 00:16:12.054 "state": "completed", 00:16:12.054 "digest": "sha256", 00:16:12.054 "dhgroup": "ffdhe2048" 00:16:12.054 } 00:16:12.054 } 00:16:12.054 ]' 00:16:12.054 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
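The `--dhchap-secret` strings exercised throughout this log (e.g. `DHHC-1:03:...=:`) use the NVMe in-band authentication key representation: a `DHHC-1:` prefix, a two-digit field naming the hash used to transform the key (`00` = none, `01`/`02`/`03` = SHA-256/384/512), and a base64 payload that is assumed here to be the raw key followed by a little-endian CRC32 of the key, matching what `nvme gen-dhchap-key` emits. The sketch below is illustrative only and not part of the test; the helper names `parse_dhchap_secret` and `make_dhchap_secret` are made up for this example.

```python
import base64
import binascii

def parse_dhchap_secret(secret: str):
    """Parse an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hh>:<base64>:.

    Assumed layout: the base64 payload is the raw key bytes followed by a
    4-byte little-endian CRC32 of those bytes; <hh> selects the key
    transformation hash (00 = none, 01..03 = SHA-256/384/512).
    """
    hashes = {"00": "none", "01": "SHA-256", "02": "SHA-384", "03": "SHA-512"}
    prefix, hmac_id, payload, _trail = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(payload)
    key, crc = blob[:-4], int.from_bytes(blob[-4:], "little")
    if binascii.crc32(key) != crc:
        raise ValueError("embedded CRC32 does not match key bytes")
    return hashes[hmac_id], key

def make_dhchap_secret(key: bytes, hmac_id: str = "00") -> str:
    """Build a DHHC-1 secret string from raw key bytes (illustrative helper)."""
    blob = key + binascii.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hmac_id}:{base64.b64encode(blob).decode()}:"

# Round-trip a 32-byte demo key tagged as SHA-256-transformed.
demo = make_dhchap_secret(bytes(range(32)), "01")
hash_name, key = parse_dhchap_secret(demo)
print(hash_name, len(key))  # SHA-256 32
```

This also explains the lengths seen above: an `DHHC-1:03:` secret decodes to 68 bytes (a 64-byte SHA-512-sized key plus the 4-byte checksum), while `DHHC-1:00:` and `DHHC-1:01:` secrets carry shorter keys.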
00:16:12.569 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:12.569 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.503 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.761 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.019 00:16:14.019 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.019 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.019 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.277 { 00:16:14.277 "cntlid": 15, 00:16:14.277 "qid": 0, 00:16:14.277 "state": "enabled", 00:16:14.277 "thread": "nvmf_tgt_poll_group_000", 00:16:14.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:14.277 "listen_address": { 00:16:14.277 "trtype": "TCP", 00:16:14.277 "adrfam": "IPv4", 00:16:14.277 "traddr": "10.0.0.2", 00:16:14.277 "trsvcid": "4420" 00:16:14.277 }, 00:16:14.277 "peer_address": { 00:16:14.277 "trtype": "TCP", 00:16:14.277 "adrfam": "IPv4", 00:16:14.277 "traddr": "10.0.0.1", 00:16:14.277 "trsvcid": "52650" 00:16:14.277 }, 00:16:14.277 "auth": { 00:16:14.277 
"state": "completed", 00:16:14.277 "digest": "sha256", 00:16:14.277 "dhgroup": "ffdhe2048" 00:16:14.277 } 00:16:14.277 } 00:16:14.277 ]' 00:16:14.277 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.535 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.793 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:14.793 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.737 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.737 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.995 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.996 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.253 00:16:16.253 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.253 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.253 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.512 
19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.769 { 00:16:16.769 "cntlid": 17, 00:16:16.769 "qid": 0, 00:16:16.769 "state": "enabled", 00:16:16.769 "thread": "nvmf_tgt_poll_group_000", 00:16:16.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:16.769 "listen_address": { 00:16:16.769 "trtype": "TCP", 00:16:16.769 "adrfam": "IPv4", 00:16:16.769 "traddr": "10.0.0.2", 00:16:16.769 "trsvcid": "4420" 00:16:16.769 }, 00:16:16.769 "peer_address": { 00:16:16.769 "trtype": "TCP", 00:16:16.769 "adrfam": "IPv4", 00:16:16.769 "traddr": "10.0.0.1", 00:16:16.769 "trsvcid": "41824" 00:16:16.769 }, 00:16:16.769 "auth": { 00:16:16.769 "state": "completed", 00:16:16.769 "digest": "sha256", 00:16:16.769 "dhgroup": "ffdhe3072" 00:16:16.769 } 00:16:16.769 } 00:16:16.769 ]' 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.769 19:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.769 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.027 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:17.027 19:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.966 19:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:17.966 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.224 19:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.224 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.482 00:16:18.482 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.482 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.482 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.048 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.048 { 00:16:19.048 "cntlid": 19, 00:16:19.048 "qid": 0, 00:16:19.048 "state": "enabled", 00:16:19.048 "thread": "nvmf_tgt_poll_group_000", 00:16:19.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:19.048 "listen_address": { 00:16:19.048 "trtype": "TCP", 00:16:19.048 "adrfam": "IPv4", 00:16:19.049 "traddr": "10.0.0.2", 00:16:19.049 "trsvcid": "4420" 00:16:19.049 }, 00:16:19.049 "peer_address": { 00:16:19.049 "trtype": "TCP", 00:16:19.049 "adrfam": "IPv4", 00:16:19.049 "traddr": "10.0.0.1", 00:16:19.049 "trsvcid": "41846" 00:16:19.049 }, 00:16:19.049 "auth": { 00:16:19.049 "state": "completed", 00:16:19.049 "digest": "sha256", 00:16:19.049 "dhgroup": "ffdhe3072" 00:16:19.049 } 00:16:19.049 } 00:16:19.049 ]' 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.049 19:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:19.306 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:19.306 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.244 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.501 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:20.501 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.501 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.501 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.502 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.759 00:16:21.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.017 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.275 { 00:16:21.275 "cntlid": 21, 00:16:21.275 "qid": 0, 00:16:21.275 "state": "enabled", 00:16:21.275 "thread": "nvmf_tgt_poll_group_000", 00:16:21.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:21.275 "listen_address": { 00:16:21.275 "trtype": "TCP", 00:16:21.275 "adrfam": "IPv4", 00:16:21.275 "traddr": "10.0.0.2", 00:16:21.275 "trsvcid": "4420" 00:16:21.275 }, 00:16:21.275 "peer_address": { 00:16:21.275 "trtype": "TCP", 00:16:21.275 "adrfam": "IPv4", 
00:16:21.275 "traddr": "10.0.0.1", 00:16:21.275 "trsvcid": "41884" 00:16:21.275 }, 00:16:21.275 "auth": { 00:16:21.275 "state": "completed", 00:16:21.275 "digest": "sha256", 00:16:21.275 "dhgroup": "ffdhe3072" 00:16:21.275 } 00:16:21.275 } 00:16:21.275 ]' 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.275 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.533 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:21.533 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:22.470 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:22.728 19:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.728 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.296 00:16:23.296 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.296 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.296 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.555 { 00:16:23.555 "cntlid": 23, 00:16:23.555 "qid": 0, 00:16:23.555 "state": "enabled", 00:16:23.555 "thread": "nvmf_tgt_poll_group_000", 00:16:23.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:23.555 "listen_address": { 00:16:23.555 "trtype": "TCP", 00:16:23.555 "adrfam": "IPv4", 00:16:23.555 "traddr": "10.0.0.2", 00:16:23.555 "trsvcid": "4420" 00:16:23.555 }, 00:16:23.555 "peer_address": { 00:16:23.555 "trtype": "TCP", 00:16:23.555 "adrfam": "IPv4", 00:16:23.555 "traddr": "10.0.0.1", 00:16:23.555 "trsvcid": "41920" 00:16:23.555 }, 00:16:23.555 "auth": { 00:16:23.555 "state": "completed", 00:16:23.555 "digest": "sha256", 00:16:23.555 "dhgroup": "ffdhe3072" 00:16:23.555 } 00:16:23.555 } 00:16:23.555 ]' 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.555 19:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.555 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.814 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:23.814 19:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.750 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.007 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.263 00:16:25.520 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.520 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.520 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.778 19:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.778 { 00:16:25.778 "cntlid": 25, 00:16:25.778 "qid": 0, 00:16:25.778 "state": "enabled", 00:16:25.778 "thread": "nvmf_tgt_poll_group_000", 00:16:25.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:25.778 "listen_address": { 00:16:25.778 "trtype": "TCP", 00:16:25.778 "adrfam": "IPv4", 00:16:25.778 "traddr": "10.0.0.2", 00:16:25.778 "trsvcid": "4420" 00:16:25.778 }, 00:16:25.778 "peer_address": { 00:16:25.778 "trtype": "TCP", 00:16:25.778 "adrfam": "IPv4", 00:16:25.778 "traddr": "10.0.0.1", 00:16:25.778 "trsvcid": "43146" 00:16:25.778 }, 00:16:25.778 "auth": { 00:16:25.778 "state": "completed", 00:16:25.778 "digest": "sha256", 00:16:25.778 "dhgroup": "ffdhe4096" 00:16:25.778 } 00:16:25.778 } 00:16:25.778 ]' 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.778 19:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.036 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:26.036 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.971 19:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.971 19:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.228 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.229 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.229 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.485 00:16:27.743 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.743 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.743 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.000 { 00:16:28.000 "cntlid": 27, 00:16:28.000 "qid": 0, 00:16:28.000 "state": "enabled", 00:16:28.000 "thread": "nvmf_tgt_poll_group_000", 00:16:28.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:28.000 "listen_address": { 00:16:28.000 "trtype": "TCP", 00:16:28.000 "adrfam": "IPv4", 00:16:28.000 "traddr": "10.0.0.2", 00:16:28.000 
"trsvcid": "4420" 00:16:28.000 }, 00:16:28.000 "peer_address": { 00:16:28.000 "trtype": "TCP", 00:16:28.000 "adrfam": "IPv4", 00:16:28.000 "traddr": "10.0.0.1", 00:16:28.000 "trsvcid": "43166" 00:16:28.000 }, 00:16:28.000 "auth": { 00:16:28.000 "state": "completed", 00:16:28.000 "digest": "sha256", 00:16:28.000 "dhgroup": "ffdhe4096" 00:16:28.000 } 00:16:28.000 } 00:16:28.000 ]' 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.000 19:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.257 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:28.257 19:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.192 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.451 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.717 00:16:29.717 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.717 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:29.975 19:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.239 { 00:16:30.239 "cntlid": 29, 00:16:30.239 "qid": 0, 00:16:30.239 "state": "enabled", 00:16:30.239 "thread": "nvmf_tgt_poll_group_000", 00:16:30.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:30.239 "listen_address": { 00:16:30.239 "trtype": "TCP", 00:16:30.239 "adrfam": "IPv4", 00:16:30.239 "traddr": "10.0.0.2", 00:16:30.239 "trsvcid": "4420" 00:16:30.239 }, 00:16:30.239 "peer_address": { 00:16:30.239 "trtype": "TCP", 00:16:30.239 "adrfam": "IPv4", 00:16:30.239 "traddr": "10.0.0.1", 00:16:30.239 "trsvcid": "43202" 00:16:30.239 }, 00:16:30.239 "auth": { 00:16:30.239 "state": "completed", 00:16:30.239 "digest": "sha256", 00:16:30.239 "dhgroup": "ffdhe4096" 00:16:30.239 } 00:16:30.239 } 00:16:30.239 ]' 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.239 19:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.239 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.496 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:30.496 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.434 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.693 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.262 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.262 { 00:16:32.262 "cntlid": 31, 00:16:32.262 "qid": 0, 00:16:32.262 "state": "enabled", 00:16:32.262 "thread": "nvmf_tgt_poll_group_000", 00:16:32.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:32.262 "listen_address": { 00:16:32.262 "trtype": "TCP", 00:16:32.262 "adrfam": "IPv4", 00:16:32.262 "traddr": "10.0.0.2", 00:16:32.262 "trsvcid": "4420" 00:16:32.262 }, 00:16:32.262 "peer_address": { 00:16:32.262 "trtype": "TCP", 00:16:32.262 "adrfam": "IPv4", 00:16:32.262 "traddr": "10.0.0.1", 00:16:32.262 "trsvcid": "43224" 00:16:32.262 }, 00:16:32.262 "auth": { 00:16:32.262 "state": "completed", 00:16:32.262 "digest": "sha256", 00:16:32.262 "dhgroup": "ffdhe4096" 00:16:32.262 } 00:16:32.262 } 00:16:32.262 ]' 00:16:32.262 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.520 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.778 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:32.778 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.713 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.713 19:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.971 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.539 00:16:34.539 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.539 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.539 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.808 { 00:16:34.808 "cntlid": 33, 00:16:34.808 "qid": 0, 00:16:34.808 "state": "enabled", 00:16:34.808 "thread": "nvmf_tgt_poll_group_000", 00:16:34.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:34.808 "listen_address": { 00:16:34.808 "trtype": "TCP", 00:16:34.808 "adrfam": "IPv4", 00:16:34.808 "traddr": "10.0.0.2", 00:16:34.808 
"trsvcid": "4420" 00:16:34.808 }, 00:16:34.808 "peer_address": { 00:16:34.808 "trtype": "TCP", 00:16:34.808 "adrfam": "IPv4", 00:16:34.808 "traddr": "10.0.0.1", 00:16:34.808 "trsvcid": "43254" 00:16:34.808 }, 00:16:34.808 "auth": { 00:16:34.808 "state": "completed", 00:16:34.808 "digest": "sha256", 00:16:34.808 "dhgroup": "ffdhe6144" 00:16:34.808 } 00:16:34.808 } 00:16:34.808 ]' 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.808 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.375 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:35.375 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:36.310 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.310 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.311 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.311 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.878 00:16:36.878 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.878 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.878 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.136 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.136 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.136 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.137 { 00:16:37.137 "cntlid": 35, 00:16:37.137 "qid": 0, 00:16:37.137 "state": "enabled", 00:16:37.137 "thread": "nvmf_tgt_poll_group_000", 00:16:37.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:37.137 "listen_address": { 00:16:37.137 "trtype": "TCP", 00:16:37.137 "adrfam": "IPv4", 00:16:37.137 "traddr": "10.0.0.2", 00:16:37.137 "trsvcid": "4420" 00:16:37.137 }, 00:16:37.137 "peer_address": { 00:16:37.137 "trtype": "TCP", 00:16:37.137 "adrfam": "IPv4", 00:16:37.137 "traddr": "10.0.0.1", 00:16:37.137 "trsvcid": "49242" 00:16:37.137 }, 00:16:37.137 "auth": { 00:16:37.137 "state": "completed", 00:16:37.137 "digest": "sha256", 00:16:37.137 "dhgroup": "ffdhe6144" 00:16:37.137 } 00:16:37.137 } 00:16:37.137 ]' 00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.137 19:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:37.137 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:37.395 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:37.395 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:37.395 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:37.653 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:16:37.653 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:38.602 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.860 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:39.428
00:16:39.428 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:39.428 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:39.428 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:39.686 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:39.686 {
00:16:39.686 "cntlid": 37,
00:16:39.686 "qid": 0,
00:16:39.686 "state": "enabled",
00:16:39.686 "thread": "nvmf_tgt_poll_group_000",
00:16:39.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:16:39.686 "listen_address": {
00:16:39.686 "trtype": "TCP",
00:16:39.686 "adrfam": "IPv4",
00:16:39.686 "traddr": "10.0.0.2",
00:16:39.686 "trsvcid": "4420"
00:16:39.686 },
00:16:39.686 "peer_address": {
00:16:39.686 "trtype": "TCP",
00:16:39.687 "adrfam": "IPv4",
00:16:39.687 "traddr": "10.0.0.1",
00:16:39.687 "trsvcid": "49276"
00:16:39.687 },
00:16:39.687 "auth": {
00:16:39.687 "state": "completed",
00:16:39.687 "digest": "sha256",
00:16:39.687 "dhgroup": "ffdhe6144"
00:16:39.687 }
00:16:39.687 }
00:16:39.687 ]'
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.687 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:39.944 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk:
00:16:39.944 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk:
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:40.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:40.898 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:41.156 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:41.723
00:16:41.723 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:41.723 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:41.723 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:41.981 {
00:16:41.981 "cntlid": 39,
00:16:41.981 "qid": 0,
00:16:41.981 "state": "enabled",
00:16:41.981 "thread": "nvmf_tgt_poll_group_000",
00:16:41.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:16:41.981 "listen_address": {
00:16:41.981 "trtype": "TCP",
00:16:41.981 "adrfam": "IPv4",
00:16:41.981 "traddr": "10.0.0.2",
00:16:41.981 "trsvcid": "4420"
00:16:41.981 },
00:16:41.981 "peer_address": {
00:16:41.981 "trtype": "TCP",
00:16:41.981 "adrfam": "IPv4",
00:16:41.981 "traddr": "10.0.0.1",
00:16:41.981 "trsvcid": "49300"
00:16:41.981 },
00:16:41.981 "auth": {
00:16:41.981 "state": "completed",
00:16:41.981 "digest": "sha256",
00:16:41.981 "dhgroup": "ffdhe6144"
00:16:41.981 }
00:16:41.981 }
00:16:41.981 ]'
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:41.981 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:42.240 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=:
00:16:42.240 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=:
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:43.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:43.174 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:43.431 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:43.432 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:44.365
00:16:44.365 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:44.365 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:44.365 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:44.622 {
00:16:44.622 "cntlid": 41,
00:16:44.622 "qid": 0,
00:16:44.622 "state": "enabled",
00:16:44.622 "thread": "nvmf_tgt_poll_group_000",
00:16:44.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:16:44.622 "listen_address": {
00:16:44.622 "trtype": "TCP",
00:16:44.622 "adrfam": "IPv4",
00:16:44.622 "traddr": "10.0.0.2",
00:16:44.622 "trsvcid": "4420"
00:16:44.622 },
00:16:44.622 "peer_address": {
00:16:44.622 "trtype": "TCP",
00:16:44.622 "adrfam": "IPv4",
00:16:44.622 "traddr": "10.0.0.1",
00:16:44.622 "trsvcid": "49340"
00:16:44.622 },
00:16:44.622 "auth": {
00:16:44.622 "state": "completed",
00:16:44.622 "digest": "sha256",
00:16:44.622 "dhgroup": "ffdhe8192"
00:16:44.622 }
00:16:44.622 }
00:16:44.622 ]'
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:44.622 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:44.623 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:44.623 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:44.623 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:44.879 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=:
00:16:44.879 19:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=:
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:45.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:45.828 19:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:46.396 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:46.964
00:16:46.964 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:46.964 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:46.964 19:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.222 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:47.222 {
00:16:47.222 "cntlid": 43,
00:16:47.222 "qid": 0,
00:16:47.222 "state": "enabled",
00:16:47.222 "thread": "nvmf_tgt_poll_group_000",
00:16:47.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:16:47.222 "listen_address": {
00:16:47.222 "trtype": "TCP",
00:16:47.222 "adrfam": "IPv4",
00:16:47.222 "traddr": "10.0.0.2",
00:16:47.223 "trsvcid": "4420"
00:16:47.223 },
00:16:47.223 "peer_address": {
00:16:47.223 "trtype": "TCP",
00:16:47.223 "adrfam": "IPv4",
00:16:47.223 "traddr": "10.0.0.1",
00:16:47.223 "trsvcid": "53382"
00:16:47.223 },
00:16:47.223 "auth": {
00:16:47.223 "state": "completed",
00:16:47.223 "digest": "sha256",
00:16:47.223 "dhgroup": "ffdhe8192"
00:16:47.223 }
00:16:47.223 }
00:16:47.223 ]'
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:47.481 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:47.739 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:16:47.739 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:48.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:48.679 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:48.937 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:49.874
00:16:49.874 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:49.874 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:49.874 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.132 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:50.132 {
00:16:50.132 "cntlid": 45,
00:16:50.132 "qid": 0,
00:16:50.132 "state": "enabled",
00:16:50.132 "thread": "nvmf_tgt_poll_group_000",
00:16:50.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:16:50.132 "listen_address": {
00:16:50.132 "trtype": "TCP",
00:16:50.132 "adrfam": "IPv4",
00:16:50.132 "traddr": "10.0.0.2",
00:16:50.132 "trsvcid": "4420"
00:16:50.132 },
00:16:50.132 "peer_address": {
00:16:50.132 "trtype": "TCP",
00:16:50.132 "adrfam": "IPv4",
00:16:50.132 "traddr": "10.0.0.1",
00:16:50.132 "trsvcid": "53414"
00:16:50.132 },
00:16:50.132 "auth": {
00:16:50.132 "state": "completed",
00:16:50.132 "digest": "sha256",
00:16:50.132 "dhgroup": "ffdhe8192"
00:16:50.132 }
00:16:50.132 }
00:16:50.132 ]'
00:16:50.132 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:50.132 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:50.132 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:50.132 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:50.132 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:50.133 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.133 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.133 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:50.390 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk:
00:16:50.390 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.329 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.330 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.590 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.531 00:16:52.531 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.531 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:52.531 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.790 { 00:16:52.790 "cntlid": 47, 00:16:52.790 "qid": 0, 00:16:52.790 "state": "enabled", 00:16:52.790 "thread": "nvmf_tgt_poll_group_000", 00:16:52.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:52.790 "listen_address": { 00:16:52.790 "trtype": "TCP", 00:16:52.790 "adrfam": "IPv4", 00:16:52.790 "traddr": "10.0.0.2", 00:16:52.790 "trsvcid": "4420" 00:16:52.790 }, 00:16:52.790 "peer_address": { 00:16:52.790 "trtype": "TCP", 00:16:52.790 "adrfam": "IPv4", 00:16:52.790 "traddr": "10.0.0.1", 00:16:52.790 "trsvcid": "53436" 00:16:52.790 }, 00:16:52.790 "auth": { 00:16:52.790 "state": "completed", 00:16:52.790 "digest": "sha256", 00:16:52.790 "dhgroup": "ffdhe8192" 00:16:52.790 } 00:16:52.790 } 00:16:52.790 ]' 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.790 19:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.790 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.048 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.048 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.048 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.306 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:53.306 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.242 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.500 
19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.500 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.758 00:16:54.758 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.758 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.758 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.017 { 00:16:55.017 "cntlid": 49, 00:16:55.017 "qid": 0, 00:16:55.017 "state": "enabled", 00:16:55.017 "thread": "nvmf_tgt_poll_group_000", 00:16:55.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:55.017 "listen_address": { 00:16:55.017 "trtype": "TCP", 00:16:55.017 "adrfam": "IPv4", 00:16:55.017 "traddr": "10.0.0.2", 00:16:55.017 "trsvcid": "4420" 00:16:55.017 }, 00:16:55.017 "peer_address": { 00:16:55.017 "trtype": "TCP", 00:16:55.017 "adrfam": "IPv4", 00:16:55.017 "traddr": "10.0.0.1", 00:16:55.017 "trsvcid": "53464" 00:16:55.017 }, 00:16:55.017 "auth": { 00:16:55.017 "state": "completed", 00:16:55.017 "digest": "sha384", 00:16:55.017 "dhgroup": "null" 00:16:55.017 } 00:16:55.017 } 00:16:55.017 ]' 00:16:55.017 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:55.276 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.534 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:55.534 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.473 19:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.473 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.730 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.988 00:16:56.988 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.988 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.988 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.246 { 00:16:57.246 "cntlid": 51, 00:16:57.246 "qid": 0, 00:16:57.246 "state": "enabled", 00:16:57.246 "thread": "nvmf_tgt_poll_group_000", 00:16:57.246 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:57.246 "listen_address": { 00:16:57.246 "trtype": "TCP", 00:16:57.246 "adrfam": "IPv4", 00:16:57.246 "traddr": "10.0.0.2", 00:16:57.246 "trsvcid": "4420" 00:16:57.246 }, 00:16:57.246 "peer_address": { 00:16:57.246 "trtype": "TCP", 00:16:57.246 "adrfam": "IPv4", 00:16:57.246 "traddr": "10.0.0.1", 00:16:57.246 "trsvcid": "57882" 00:16:57.246 }, 00:16:57.246 "auth": { 00:16:57.246 "state": "completed", 00:16:57.246 "digest": "sha384", 00:16:57.246 "dhgroup": "null" 00:16:57.246 } 00:16:57.246 } 00:16:57.246 ]' 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.246 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.503 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.503 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.503 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.503 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.503 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.761 19:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:57.761 19:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.697 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.955 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.213 00:16:59.213 19:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.213 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.213 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.471 { 00:16:59.471 "cntlid": 53, 00:16:59.471 "qid": 0, 00:16:59.471 "state": "enabled", 00:16:59.471 "thread": "nvmf_tgt_poll_group_000", 00:16:59.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:59.471 "listen_address": { 00:16:59.471 "trtype": "TCP", 00:16:59.471 "adrfam": "IPv4", 00:16:59.471 "traddr": "10.0.0.2", 00:16:59.471 "trsvcid": "4420" 00:16:59.471 }, 00:16:59.471 "peer_address": { 00:16:59.471 "trtype": "TCP", 00:16:59.471 "adrfam": "IPv4", 00:16:59.471 "traddr": "10.0.0.1", 00:16:59.471 "trsvcid": "57920" 00:16:59.471 }, 00:16:59.471 "auth": { 00:16:59.471 "state": "completed", 00:16:59.471 "digest": "sha384", 00:16:59.471 "dhgroup": "null" 00:16:59.471 } 00:16:59.471 } 00:16:59.471 ]' 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.471 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.727 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.727 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.727 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.728 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.728 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.984 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:16:59.984 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:00.916 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.917 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:01.174 
19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.174 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.432 00:17:01.432 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.432 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.432 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.690 19:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.690 { 00:17:01.690 "cntlid": 55, 00:17:01.690 "qid": 0, 00:17:01.690 "state": "enabled", 00:17:01.690 "thread": "nvmf_tgt_poll_group_000", 00:17:01.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:01.690 "listen_address": { 00:17:01.690 "trtype": "TCP", 00:17:01.690 "adrfam": "IPv4", 00:17:01.690 "traddr": "10.0.0.2", 00:17:01.690 "trsvcid": "4420" 00:17:01.690 }, 00:17:01.690 "peer_address": { 00:17:01.690 "trtype": "TCP", 00:17:01.690 "adrfam": "IPv4", 00:17:01.690 "traddr": "10.0.0.1", 00:17:01.690 "trsvcid": "57964" 00:17:01.690 }, 00:17:01.690 "auth": { 00:17:01.690 "state": "completed", 00:17:01.690 "digest": "sha384", 00:17:01.690 "dhgroup": "null" 00:17:01.690 } 00:17:01.690 } 00:17:01.690 ]' 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.690 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.948 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.948 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.948 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.206 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:02.206 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.145 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.145 19:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.404 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.663 00:17:03.663 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.663 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.663 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.922 { 00:17:03.922 "cntlid": 57, 00:17:03.922 "qid": 0, 00:17:03.922 "state": "enabled", 00:17:03.922 "thread": "nvmf_tgt_poll_group_000", 00:17:03.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:03.922 "listen_address": { 00:17:03.922 "trtype": "TCP", 00:17:03.922 "adrfam": "IPv4", 00:17:03.922 "traddr": "10.0.0.2", 00:17:03.922 
"trsvcid": "4420" 00:17:03.922 }, 00:17:03.922 "peer_address": { 00:17:03.922 "trtype": "TCP", 00:17:03.922 "adrfam": "IPv4", 00:17:03.922 "traddr": "10.0.0.1", 00:17:03.922 "trsvcid": "58002" 00:17:03.922 }, 00:17:03.922 "auth": { 00:17:03.922 "state": "completed", 00:17:03.922 "digest": "sha384", 00:17:03.922 "dhgroup": "ffdhe2048" 00:17:03.922 } 00:17:03.922 } 00:17:03.922 ]' 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.922 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.180 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.180 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.180 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.440 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:04.440 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.376 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.634 19:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.634 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.891 00:17:05.891 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.891 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.891 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.149 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.149 { 00:17:06.149 "cntlid": 59, 00:17:06.149 "qid": 0, 00:17:06.149 "state": "enabled", 00:17:06.149 "thread": "nvmf_tgt_poll_group_000", 00:17:06.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.149 "listen_address": { 00:17:06.149 "trtype": "TCP", 00:17:06.149 "adrfam": "IPv4", 00:17:06.149 "traddr": "10.0.0.2", 00:17:06.149 "trsvcid": "4420" 00:17:06.149 }, 00:17:06.149 "peer_address": { 00:17:06.149 "trtype": "TCP", 00:17:06.149 "adrfam": "IPv4", 00:17:06.150 "traddr": "10.0.0.1", 00:17:06.150 "trsvcid": "58846" 00:17:06.150 }, 00:17:06.150 "auth": { 00:17:06.150 "state": "completed", 00:17:06.150 "digest": "sha384", 00:17:06.150 "dhgroup": "ffdhe2048" 00:17:06.150 } 00:17:06.150 } 00:17:06.150 ]' 00:17:06.150 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.150 19:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.150 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.150 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.150 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.408 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.408 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.408 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.667 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:06.667 19:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.604 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.862 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.121 00:17:08.121 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.121 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.121 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.379 19:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.379 { 00:17:08.379 "cntlid": 61, 00:17:08.379 "qid": 0, 00:17:08.379 "state": "enabled", 00:17:08.379 "thread": "nvmf_tgt_poll_group_000", 00:17:08.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:08.379 "listen_address": { 00:17:08.379 "trtype": "TCP", 00:17:08.379 "adrfam": "IPv4", 00:17:08.379 "traddr": "10.0.0.2", 00:17:08.379 "trsvcid": "4420" 00:17:08.379 }, 00:17:08.379 "peer_address": { 00:17:08.379 "trtype": "TCP", 00:17:08.379 "adrfam": "IPv4", 00:17:08.379 "traddr": "10.0.0.1", 00:17:08.379 "trsvcid": "58862" 00:17:08.379 }, 00:17:08.379 "auth": { 00:17:08.379 "state": "completed", 00:17:08.379 "digest": "sha384", 00:17:08.379 "dhgroup": "ffdhe2048" 00:17:08.379 } 00:17:08.379 } 00:17:08.379 ]' 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.379 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.947 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:08.947 19:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.882 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.140 00:17:10.400 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.400 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.400 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.663 { 00:17:10.663 "cntlid": 63, 00:17:10.663 "qid": 0, 00:17:10.663 "state": "enabled", 00:17:10.663 "thread": "nvmf_tgt_poll_group_000", 00:17:10.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:10.663 "listen_address": { 00:17:10.663 "trtype": "TCP", 00:17:10.663 "adrfam": 
"IPv4", 00:17:10.663 "traddr": "10.0.0.2", 00:17:10.663 "trsvcid": "4420" 00:17:10.663 }, 00:17:10.663 "peer_address": { 00:17:10.663 "trtype": "TCP", 00:17:10.663 "adrfam": "IPv4", 00:17:10.663 "traddr": "10.0.0.1", 00:17:10.663 "trsvcid": "58882" 00:17:10.663 }, 00:17:10.663 "auth": { 00:17:10.663 "state": "completed", 00:17:10.663 "digest": "sha384", 00:17:10.663 "dhgroup": "ffdhe2048" 00:17:10.663 } 00:17:10.663 } 00:17:10.663 ]' 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.663 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.935 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:10.935 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.911 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.208 
19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.208 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.846 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.846 19:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.846 { 00:17:12.846 "cntlid": 65, 00:17:12.846 "qid": 0, 00:17:12.846 "state": "enabled", 00:17:12.846 "thread": "nvmf_tgt_poll_group_000", 00:17:12.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:12.846 "listen_address": { 00:17:12.846 "trtype": "TCP", 00:17:12.846 "adrfam": "IPv4", 00:17:12.846 "traddr": "10.0.0.2", 00:17:12.846 "trsvcid": "4420" 00:17:12.846 }, 00:17:12.846 "peer_address": { 00:17:12.846 "trtype": "TCP", 00:17:12.846 "adrfam": "IPv4", 00:17:12.846 "traddr": "10.0.0.1", 00:17:12.846 "trsvcid": "58904" 00:17:12.846 }, 00:17:12.846 "auth": { 00:17:12.846 "state": "completed", 00:17:12.846 "digest": "sha384", 00:17:12.846 "dhgroup": "ffdhe3072" 00:17:12.846 } 00:17:12.846 } 00:17:12.846 ]' 00:17:12.846 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.148 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.149 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.451 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:13.451 19:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:14.088 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.358 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.675 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.977 00:17:14.977 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.977 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.977 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.267 19:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.267 { 00:17:15.267 "cntlid": 67, 00:17:15.267 "qid": 0, 00:17:15.267 "state": "enabled", 00:17:15.267 "thread": "nvmf_tgt_poll_group_000", 00:17:15.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:15.267 "listen_address": { 00:17:15.267 "trtype": "TCP", 00:17:15.267 "adrfam": "IPv4", 00:17:15.267 "traddr": "10.0.0.2", 00:17:15.267 "trsvcid": "4420" 00:17:15.267 }, 00:17:15.267 "peer_address": { 00:17:15.267 "trtype": "TCP", 00:17:15.267 "adrfam": "IPv4", 00:17:15.267 "traddr": "10.0.0.1", 00:17:15.267 "trsvcid": "58940" 00:17:15.267 }, 00:17:15.267 "auth": { 00:17:15.267 "state": "completed", 00:17:15.267 "digest": "sha384", 00:17:15.267 "dhgroup": "ffdhe3072" 00:17:15.267 } 00:17:15.267 } 00:17:15.267 ]' 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.267 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.557 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:15.557 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.489 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.745 19:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.309 00:17:17.309 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.309 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.309 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.566 { 00:17:17.566 "cntlid": 69, 00:17:17.566 "qid": 0, 00:17:17.566 "state": "enabled", 00:17:17.566 "thread": "nvmf_tgt_poll_group_000", 00:17:17.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:17.566 
"listen_address": { 00:17:17.566 "trtype": "TCP", 00:17:17.566 "adrfam": "IPv4", 00:17:17.566 "traddr": "10.0.0.2", 00:17:17.566 "trsvcid": "4420" 00:17:17.566 }, 00:17:17.566 "peer_address": { 00:17:17.566 "trtype": "TCP", 00:17:17.566 "adrfam": "IPv4", 00:17:17.566 "traddr": "10.0.0.1", 00:17:17.566 "trsvcid": "51868" 00:17:17.566 }, 00:17:17.566 "auth": { 00:17:17.566 "state": "completed", 00:17:17.566 "digest": "sha384", 00:17:17.566 "dhgroup": "ffdhe3072" 00:17:17.566 } 00:17:17.566 } 00:17:17.566 ]' 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.566 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.823 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:17.823 19:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.759 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.018 19:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.276 00:17:19.276 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.276 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.276 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.844 { 00:17:19.844 "cntlid": 71, 00:17:19.844 "qid": 0, 00:17:19.844 "state": "enabled", 00:17:19.844 "thread": "nvmf_tgt_poll_group_000", 00:17:19.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:19.844 "listen_address": { 00:17:19.844 "trtype": "TCP", 00:17:19.844 "adrfam": "IPv4", 00:17:19.844 "traddr": "10.0.0.2", 00:17:19.844 "trsvcid": "4420" 00:17:19.844 }, 00:17:19.844 "peer_address": { 00:17:19.844 "trtype": "TCP", 00:17:19.844 "adrfam": "IPv4", 00:17:19.844 "traddr": "10.0.0.1", 00:17:19.844 "trsvcid": "51890" 00:17:19.844 }, 00:17:19.844 "auth": { 00:17:19.844 "state": "completed", 00:17:19.844 "digest": "sha384", 00:17:19.844 "dhgroup": "ffdhe3072" 00:17:19.844 } 00:17:19.844 } 00:17:19.844 ]' 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.844 19:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.844 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.102 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:20.102 19:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.035 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.292 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.550 00:17:21.809 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.809 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.809 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.068 19:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.068 { 00:17:22.068 "cntlid": 73, 00:17:22.068 "qid": 0, 00:17:22.068 "state": "enabled", 00:17:22.068 "thread": "nvmf_tgt_poll_group_000", 00:17:22.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:22.068 "listen_address": { 00:17:22.068 "trtype": "TCP", 00:17:22.068 "adrfam": "IPv4", 00:17:22.068 "traddr": "10.0.0.2", 00:17:22.068 "trsvcid": "4420" 00:17:22.068 }, 00:17:22.068 "peer_address": { 00:17:22.068 "trtype": "TCP", 00:17:22.068 "adrfam": "IPv4", 00:17:22.068 "traddr": "10.0.0.1", 00:17:22.068 "trsvcid": "51926" 00:17:22.068 }, 00:17:22.068 "auth": { 00:17:22.068 "state": "completed", 00:17:22.068 "digest": "sha384", 00:17:22.068 "dhgroup": "ffdhe4096" 00:17:22.068 } 00:17:22.068 } 00:17:22.068 ]' 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.068 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.068 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.068 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.068 19:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.327 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:22.327 19:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:23.263 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.264 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.522 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.090 00:17:24.090 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.090 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.090 19:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.090 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.348 { 00:17:24.348 "cntlid": 75, 00:17:24.348 "qid": 0, 00:17:24.348 "state": "enabled", 00:17:24.348 "thread": "nvmf_tgt_poll_group_000", 00:17:24.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:24.348 
"listen_address": { 00:17:24.348 "trtype": "TCP", 00:17:24.348 "adrfam": "IPv4", 00:17:24.348 "traddr": "10.0.0.2", 00:17:24.348 "trsvcid": "4420" 00:17:24.348 }, 00:17:24.348 "peer_address": { 00:17:24.348 "trtype": "TCP", 00:17:24.348 "adrfam": "IPv4", 00:17:24.348 "traddr": "10.0.0.1", 00:17:24.348 "trsvcid": "51944" 00:17:24.348 }, 00:17:24.348 "auth": { 00:17:24.348 "state": "completed", 00:17:24.348 "digest": "sha384", 00:17:24.348 "dhgroup": "ffdhe4096" 00:17:24.348 } 00:17:24.348 } 00:17:24.348 ]' 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.348 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.607 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:24.607 19:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.540 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.799 19:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.057 00:17:26.057 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:26.057 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.057 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.623 { 00:17:26.623 "cntlid": 77, 00:17:26.623 "qid": 0, 00:17:26.623 "state": "enabled", 00:17:26.623 "thread": "nvmf_tgt_poll_group_000", 00:17:26.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:26.623 "listen_address": { 00:17:26.623 "trtype": "TCP", 00:17:26.623 "adrfam": "IPv4", 00:17:26.623 "traddr": "10.0.0.2", 00:17:26.623 "trsvcid": "4420" 00:17:26.623 }, 00:17:26.623 "peer_address": { 00:17:26.623 "trtype": "TCP", 00:17:26.623 "adrfam": "IPv4", 00:17:26.623 "traddr": "10.0.0.1", 00:17:26.623 "trsvcid": "42556" 00:17:26.623 }, 00:17:26.623 "auth": { 00:17:26.623 "state": "completed", 00:17:26.623 "digest": "sha384", 00:17:26.623 "dhgroup": "ffdhe4096" 00:17:26.623 } 00:17:26.623 } 00:17:26.623 ]' 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.623 19:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.623 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.881 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:26.882 19:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.814 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:28.071 19:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.071 19:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.330 00:17:28.330 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.330 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.330 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.589 19:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.589 { 00:17:28.589 "cntlid": 79, 00:17:28.589 "qid": 0, 00:17:28.589 "state": "enabled", 00:17:28.589 "thread": "nvmf_tgt_poll_group_000", 00:17:28.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:28.589 "listen_address": { 00:17:28.589 "trtype": "TCP", 00:17:28.589 "adrfam": "IPv4", 00:17:28.589 "traddr": "10.0.0.2", 00:17:28.589 "trsvcid": "4420" 00:17:28.589 }, 00:17:28.589 "peer_address": { 00:17:28.589 "trtype": "TCP", 00:17:28.589 "adrfam": "IPv4", 00:17:28.589 "traddr": "10.0.0.1", 00:17:28.589 "trsvcid": "42584" 00:17:28.589 }, 00:17:28.589 "auth": { 00:17:28.589 "state": "completed", 00:17:28.589 "digest": "sha384", 00:17:28.589 "dhgroup": "ffdhe4096" 00:17:28.589 } 00:17:28.589 } 00:17:28.589 ]' 00:17:28.589 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.848 19:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.848 19:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.106 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:29.106 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:30.040 19:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.296 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.860 00:17:30.860 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.860 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.860 19:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.117 { 00:17:31.117 "cntlid": 81, 00:17:31.117 "qid": 0, 00:17:31.117 "state": "enabled", 00:17:31.117 "thread": "nvmf_tgt_poll_group_000", 00:17:31.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:31.117 "listen_address": { 
00:17:31.117 "trtype": "TCP", 00:17:31.117 "adrfam": "IPv4", 00:17:31.117 "traddr": "10.0.0.2", 00:17:31.117 "trsvcid": "4420" 00:17:31.117 }, 00:17:31.117 "peer_address": { 00:17:31.117 "trtype": "TCP", 00:17:31.117 "adrfam": "IPv4", 00:17:31.117 "traddr": "10.0.0.1", 00:17:31.117 "trsvcid": "42610" 00:17:31.117 }, 00:17:31.117 "auth": { 00:17:31.117 "state": "completed", 00:17:31.117 "digest": "sha384", 00:17:31.117 "dhgroup": "ffdhe6144" 00:17:31.117 } 00:17:31.117 } 00:17:31.117 ]' 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.117 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.376 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:31.376 19:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.320 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.883 19:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.451 00:17:33.451 19:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.451 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.451 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.708 { 00:17:33.708 "cntlid": 83, 00:17:33.708 "qid": 0, 00:17:33.708 "state": "enabled", 00:17:33.708 "thread": "nvmf_tgt_poll_group_000", 00:17:33.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:33.708 "listen_address": { 00:17:33.708 "trtype": "TCP", 00:17:33.708 "adrfam": "IPv4", 00:17:33.708 "traddr": "10.0.0.2", 00:17:33.708 "trsvcid": "4420" 00:17:33.708 }, 00:17:33.708 "peer_address": { 00:17:33.708 "trtype": "TCP", 00:17:33.708 "adrfam": "IPv4", 00:17:33.708 "traddr": "10.0.0.1", 00:17:33.708 "trsvcid": "42636" 00:17:33.708 }, 00:17:33.708 "auth": { 00:17:33.708 "state": "completed", 00:17:33.708 "digest": "sha384", 00:17:33.708 "dhgroup": "ffdhe6144" 00:17:33.708 } 00:17:33.708 } 00:17:33.708 ]' 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.708 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.709 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.709 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.709 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.709 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.967 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:33.967 19:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:17:34.904 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.904 19:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:34.904 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.904 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.904 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.904 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.905 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.905 19:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.177 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.742 00:17:35.742 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.742 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.742 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.999 { 00:17:35.999 "cntlid": 85, 00:17:35.999 "qid": 0, 00:17:35.999 "state": "enabled", 00:17:35.999 "thread": "nvmf_tgt_poll_group_000", 00:17:35.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:35.999 "listen_address": { 00:17:35.999 "trtype": "TCP", 00:17:35.999 "adrfam": "IPv4", 00:17:35.999 "traddr": "10.0.0.2", 00:17:35.999 "trsvcid": "4420" 00:17:35.999 }, 00:17:35.999 "peer_address": { 00:17:35.999 "trtype": "TCP", 00:17:35.999 "adrfam": "IPv4", 00:17:35.999 "traddr": "10.0.0.1", 00:17:35.999 "trsvcid": "46120" 00:17:35.999 }, 00:17:35.999 "auth": { 00:17:35.999 "state": "completed", 00:17:35.999 "digest": "sha384", 00:17:35.999 "dhgroup": "ffdhe6144" 00:17:35.999 } 00:17:35.999 } 00:17:35.999 ]' 00:17:35.999 19:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.999 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.999 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.257 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.257 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.257 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:36.257 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.257 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.516 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:36.516 19:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.454 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.712 19:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.282 00:17:38.282 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.282 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.282 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.541 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.541 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.541 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.542 { 00:17:38.542 "cntlid": 87, 00:17:38.542 "qid": 0, 00:17:38.542 "state": "enabled", 00:17:38.542 "thread": "nvmf_tgt_poll_group_000", 00:17:38.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:38.542 "listen_address": { 00:17:38.542 "trtype": 
"TCP", 00:17:38.542 "adrfam": "IPv4", 00:17:38.542 "traddr": "10.0.0.2", 00:17:38.542 "trsvcid": "4420" 00:17:38.542 }, 00:17:38.542 "peer_address": { 00:17:38.542 "trtype": "TCP", 00:17:38.542 "adrfam": "IPv4", 00:17:38.542 "traddr": "10.0.0.1", 00:17:38.542 "trsvcid": "46152" 00:17:38.542 }, 00:17:38.542 "auth": { 00:17:38.542 "state": "completed", 00:17:38.542 "digest": "sha384", 00:17:38.542 "dhgroup": "ffdhe6144" 00:17:38.542 } 00:17:38.542 } 00:17:38.542 ]' 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.542 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.800 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.800 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.800 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.800 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.800 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.059 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:39.059 19:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.996 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.254 19:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.254 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.188 00:17:41.188 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.188 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.188 19:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.445 { 00:17:41.445 "cntlid": 89, 00:17:41.445 "qid": 0, 00:17:41.445 "state": "enabled", 00:17:41.445 "thread": "nvmf_tgt_poll_group_000", 00:17:41.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:41.445 "listen_address": { 00:17:41.445 "trtype": "TCP", 00:17:41.445 "adrfam": "IPv4", 00:17:41.445 "traddr": "10.0.0.2", 00:17:41.445 "trsvcid": "4420" 00:17:41.445 }, 00:17:41.445 "peer_address": { 00:17:41.445 "trtype": "TCP", 00:17:41.445 "adrfam": "IPv4", 00:17:41.445 "traddr": "10.0.0.1", 00:17:41.445 "trsvcid": "46186" 00:17:41.445 }, 00:17:41.445 "auth": { 00:17:41.445 "state": "completed", 00:17:41.445 "digest": "sha384", 00:17:41.445 "dhgroup": "ffdhe8192" 00:17:41.445 } 00:17:41.445 } 00:17:41.445 ]' 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.445 19:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.445 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.702 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:41.702 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.640 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:42.898 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:43.834
00:17:43.834 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:43.834 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:43.834 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:44.093 {
00:17:44.093 "cntlid": 91,
00:17:44.093 "qid": 0,
00:17:44.093 "state": "enabled",
00:17:44.093 "thread": "nvmf_tgt_poll_group_000",
00:17:44.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:44.093 "listen_address": {
00:17:44.093 "trtype": "TCP",
00:17:44.093 "adrfam": "IPv4",
00:17:44.093 "traddr": "10.0.0.2",
00:17:44.093 "trsvcid": "4420"
00:17:44.093 },
00:17:44.093 "peer_address": {
00:17:44.093 "trtype": "TCP",
00:17:44.093 "adrfam": "IPv4",
00:17:44.093 "traddr": "10.0.0.1",
00:17:44.093 "trsvcid": "46210"
00:17:44.093 },
00:17:44.093 "auth": {
00:17:44.093 "state": "completed",
00:17:44.093 "digest": "sha384",
00:17:44.093 "dhgroup": "ffdhe8192"
00:17:44.093 }
00:17:44.093 }
00:17:44.093 ]'
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:44.093 19:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:44.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:44.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:44.093 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:44.352 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:17:44.352 19:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:45.289 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:45.858 19:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:46.424
00:17:46.424 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:46.424 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:46.424 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.992 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:46.993 {
00:17:46.993 "cntlid": 93,
00:17:46.993 "qid": 0,
00:17:46.993 "state": "enabled",
00:17:46.993 "thread": "nvmf_tgt_poll_group_000",
00:17:46.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:46.993 "listen_address": {
00:17:46.993 "trtype": "TCP",
00:17:46.993 "adrfam": "IPv4",
00:17:46.993 "traddr": "10.0.0.2",
00:17:46.993 "trsvcid": "4420"
00:17:46.993 },
00:17:46.993 "peer_address": {
00:17:46.993 "trtype": "TCP",
00:17:46.993 "adrfam": "IPv4",
00:17:46.993 "traddr": "10.0.0.1",
00:17:46.993 "trsvcid": "46294"
00:17:46.993 },
00:17:46.993 "auth": {
00:17:46.993 "state": "completed",
00:17:46.993 "digest": "sha384",
00:17:46.993 "dhgroup": "ffdhe8192"
00:17:46.993 }
00:17:46.993 }
00:17:46.993 ]'
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.993 19:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:47.250 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk:
00:17:47.250 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk:
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:48.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:48.187 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:48.457 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:49.393
00:17:49.393 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:49.393 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:49.393 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:49.650 {
00:17:49.650 "cntlid": 95,
00:17:49.650 "qid": 0,
00:17:49.650 "state": "enabled",
00:17:49.650 "thread": "nvmf_tgt_poll_group_000",
00:17:49.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:49.650 "listen_address": {
00:17:49.650 "trtype": "TCP",
00:17:49.650 "adrfam": "IPv4",
00:17:49.650 "traddr": "10.0.0.2",
00:17:49.650 "trsvcid": "4420"
00:17:49.650 },
00:17:49.650 "peer_address": {
00:17:49.650 "trtype": "TCP",
00:17:49.650 "adrfam": "IPv4",
00:17:49.650 "traddr": "10.0.0.1",
00:17:49.650 "trsvcid": "46314"
00:17:49.650 },
00:17:49.650 "auth": {
00:17:49.650 "state": "completed",
00:17:49.650 "digest": "sha384",
00:17:49.650 "dhgroup": "ffdhe8192"
00:17:49.650 }
00:17:49.650 }
00:17:49.650 ]'
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.650 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:49.907 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=:
00:17:49.907 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=:
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:50.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:50.837 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:50.838 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:50.838 19:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:51.096 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:51.661
00:17:51.661 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:51.661 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:51.661 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:51.920 {
00:17:51.920 "cntlid": 97,
00:17:51.920 "qid": 0,
00:17:51.920 "state": "enabled",
00:17:51.920 "thread": "nvmf_tgt_poll_group_000",
00:17:51.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:51.920 "listen_address": {
00:17:51.920 "trtype": "TCP",
00:17:51.920 "adrfam": "IPv4",
00:17:51.920 "traddr": "10.0.0.2",
00:17:51.920 "trsvcid": "4420"
00:17:51.920 },
00:17:51.920 "peer_address": {
00:17:51.920 "trtype": "TCP",
00:17:51.920 "adrfam": "IPv4",
00:17:51.920 "traddr": "10.0.0.1",
00:17:51.920 "trsvcid": "46348"
00:17:51.920 },
00:17:51.920 "auth": {
00:17:51.920 "state": "completed",
00:17:51.920 "digest": "sha512",
00:17:51.920 "dhgroup": "null"
00:17:51.920 }
00:17:51.920 }
00:17:51.920 ]'
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:51.920 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:52.179 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=:
00:17:52.179 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=:
00:17:53.115 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:53.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:53.115 19:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:53.115 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:53.373 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:53.940
00:17:53.940 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:53.940 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:53.940 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:54.199 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:54.199 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:54.199 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.199 19:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:54.199 {
00:17:54.199 "cntlid": 99,
00:17:54.199 "qid": 0,
00:17:54.199 "state": "enabled",
00:17:54.199 "thread": "nvmf_tgt_poll_group_000",
00:17:54.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:54.199 "listen_address": {
00:17:54.199 "trtype": "TCP",
00:17:54.199 "adrfam": "IPv4",
00:17:54.199 "traddr": "10.0.0.2",
00:17:54.199 "trsvcid": "4420"
00:17:54.199 },
00:17:54.199 "peer_address": {
00:17:54.199 "trtype": "TCP",
00:17:54.199 "adrfam": "IPv4",
00:17:54.199 "traddr": "10.0.0.1",
00:17:54.199 "trsvcid": "46386"
00:17:54.199 },
00:17:54.199 "auth": {
00:17:54.199 "state": "completed",
00:17:54.199 "digest": "sha512",
00:17:54.199 "dhgroup": "null"
00:17:54.199 }
00:17:54.199 }
00:17:54.199 ]'
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:54.199 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:54.457 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:17:54.457 19:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==:
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:55.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:55.417 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:55.675 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:55.676 19:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:56.245
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.245 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:56.503 {
00:17:56.503 "cntlid": 101,
00:17:56.503 "qid": 0,
00:17:56.503 "state": "enabled",
00:17:56.503 "thread": "nvmf_tgt_poll_group_000",
00:17:56.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:17:56.503 "listen_address": {
00:17:56.503 "trtype": "TCP",
00:17:56.503 "adrfam": "IPv4",
00:17:56.503 "traddr": "10.0.0.2",
00:17:56.503 "trsvcid": "4420"
00:17:56.503 },
00:17:56.503 "peer_address": {
00:17:56.503 "trtype": "TCP",
00:17:56.503 "adrfam": "IPv4",
00:17:56.503 "traddr": "10.0.0.1",
00:17:56.503 "trsvcid": "43336"
00:17:56.503 },
00:17:56.503 "auth": {
00:17:56.503 "state": "completed",
00:17:56.503 "digest": "sha512",
00:17:56.503 "dhgroup": "null"
00:17:56.503 }
00:17:56.503 }
00:17:56.503 ]' 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.503 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.762 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:56.762 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.699 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.699 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.958 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:57.958 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.959 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.217 00:17:58.217 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.217 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.217 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.475 { 00:17:58.475 "cntlid": 103, 00:17:58.475 "qid": 0, 00:17:58.475 "state": "enabled", 00:17:58.475 "thread": "nvmf_tgt_poll_group_000", 00:17:58.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:58.475 "listen_address": { 00:17:58.475 "trtype": "TCP", 00:17:58.475 "adrfam": "IPv4", 00:17:58.475 "traddr": "10.0.0.2", 00:17:58.475 "trsvcid": "4420" 00:17:58.475 }, 00:17:58.475 "peer_address": { 00:17:58.475 "trtype": "TCP", 00:17:58.475 "adrfam": "IPv4", 00:17:58.475 "traddr": "10.0.0.1", 00:17:58.475 "trsvcid": "43360" 00:17:58.475 }, 00:17:58.475 "auth": { 00:17:58.475 "state": "completed", 00:17:58.475 "digest": "sha512", 00:17:58.475 "dhgroup": "null" 00:17:58.475 } 00:17:58.475 } 00:17:58.475 ]' 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.475 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.733 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.733 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.733 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.733 19:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.733 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.991 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:58.991 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.930 19:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.930 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.187 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.444 00:18:00.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.444 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.702 { 00:18:00.702 "cntlid": 105, 00:18:00.702 "qid": 0, 00:18:00.702 "state": "enabled", 00:18:00.702 "thread": "nvmf_tgt_poll_group_000", 00:18:00.702 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:00.702 "listen_address": { 00:18:00.702 "trtype": "TCP", 00:18:00.702 "adrfam": "IPv4", 00:18:00.702 "traddr": "10.0.0.2", 00:18:00.702 "trsvcid": "4420" 00:18:00.702 }, 00:18:00.702 "peer_address": { 00:18:00.702 "trtype": "TCP", 00:18:00.702 "adrfam": "IPv4", 00:18:00.702 "traddr": "10.0.0.1", 00:18:00.702 "trsvcid": "43396" 00:18:00.702 }, 00:18:00.702 "auth": { 00:18:00.702 "state": "completed", 00:18:00.702 "digest": "sha512", 00:18:00.702 "dhgroup": "ffdhe2048" 00:18:00.702 } 00:18:00.702 } 00:18:00.702 ]' 00:18:00.702 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.960 19:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.218 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:01.218 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.263 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.546 19:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.546 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.822 00:18:02.823 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.823 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.823 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.087 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.088 { 00:18:03.088 "cntlid": 107, 00:18:03.088 "qid": 0, 00:18:03.088 "state": "enabled", 00:18:03.088 "thread": "nvmf_tgt_poll_group_000", 00:18:03.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:03.088 "listen_address": { 00:18:03.088 "trtype": "TCP", 00:18:03.088 "adrfam": "IPv4", 00:18:03.088 "traddr": "10.0.0.2", 00:18:03.088 "trsvcid": "4420" 00:18:03.088 }, 00:18:03.088 "peer_address": { 00:18:03.088 "trtype": "TCP", 00:18:03.088 "adrfam": "IPv4", 00:18:03.088 "traddr": "10.0.0.1", 00:18:03.088 "trsvcid": "43402" 00:18:03.088 }, 00:18:03.088 "auth": { 00:18:03.088 "state": 
"completed", 00:18:03.088 "digest": "sha512", 00:18:03.088 "dhgroup": "ffdhe2048" 00:18:03.088 } 00:18:03.088 } 00:18:03.088 ]' 00:18:03.088 19:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.088 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.347 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:03.347 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:04.284 19:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.284 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.543 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:04.543 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.543 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.544 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.109 00:18:05.109 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.109 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.109 19:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.368 
19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.368 { 00:18:05.368 "cntlid": 109, 00:18:05.368 "qid": 0, 00:18:05.368 "state": "enabled", 00:18:05.368 "thread": "nvmf_tgt_poll_group_000", 00:18:05.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:05.368 "listen_address": { 00:18:05.368 "trtype": "TCP", 00:18:05.368 "adrfam": "IPv4", 00:18:05.368 "traddr": "10.0.0.2", 00:18:05.368 "trsvcid": "4420" 00:18:05.368 }, 00:18:05.368 "peer_address": { 00:18:05.368 "trtype": "TCP", 00:18:05.368 "adrfam": "IPv4", 00:18:05.368 "traddr": "10.0.0.1", 00:18:05.368 "trsvcid": "43436" 00:18:05.368 }, 00:18:05.368 "auth": { 00:18:05.368 "state": "completed", 00:18:05.368 "digest": "sha512", 00:18:05.368 "dhgroup": "ffdhe2048" 00:18:05.368 } 00:18:05.368 } 00:18:05.368 ]' 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.368 19:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.368 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.625 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:05.625 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:06.559 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.560 
19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.560 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.819 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:06.819 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.819 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.819 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.819 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.820 19:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.820 19:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.387 00:18:07.387 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.387 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.387 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.644 { 00:18:07.644 "cntlid": 111, 
00:18:07.644 "qid": 0, 00:18:07.644 "state": "enabled", 00:18:07.644 "thread": "nvmf_tgt_poll_group_000", 00:18:07.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:07.644 "listen_address": { 00:18:07.644 "trtype": "TCP", 00:18:07.644 "adrfam": "IPv4", 00:18:07.644 "traddr": "10.0.0.2", 00:18:07.644 "trsvcid": "4420" 00:18:07.644 }, 00:18:07.644 "peer_address": { 00:18:07.644 "trtype": "TCP", 00:18:07.644 "adrfam": "IPv4", 00:18:07.644 "traddr": "10.0.0.1", 00:18:07.644 "trsvcid": "60984" 00:18:07.644 }, 00:18:07.644 "auth": { 00:18:07.644 "state": "completed", 00:18:07.644 "digest": "sha512", 00:18:07.644 "dhgroup": "ffdhe2048" 00:18:07.644 } 00:18:07.644 } 00:18:07.644 ]' 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.644 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.910 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:07.910 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.849 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.108 19:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.108 19:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.366 00:18:09.366 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.366 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.366 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.624 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.625 { 00:18:09.625 "cntlid": 113, 00:18:09.625 "qid": 0, 00:18:09.625 "state": "enabled", 00:18:09.625 "thread": "nvmf_tgt_poll_group_000", 00:18:09.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:09.625 "listen_address": { 00:18:09.625 "trtype": "TCP", 00:18:09.625 "adrfam": "IPv4", 00:18:09.625 "traddr": "10.0.0.2", 00:18:09.625 "trsvcid": "4420" 00:18:09.625 }, 00:18:09.625 "peer_address": { 00:18:09.625 "trtype": "TCP", 00:18:09.625 "adrfam": "IPv4", 00:18:09.625 "traddr": "10.0.0.1", 00:18:09.625 "trsvcid": "32776" 00:18:09.625 }, 00:18:09.625 "auth": { 00:18:09.625 "state": 
"completed", 00:18:09.625 "digest": "sha512", 00:18:09.625 "dhgroup": "ffdhe3072" 00:18:09.625 } 00:18:09.625 } 00:18:09.625 ]' 00:18:09.625 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.883 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.883 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.883 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.883 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.884 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.884 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.884 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.142 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:10.142 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.081 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.339 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.598 00:18:11.598 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.598 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.598 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.856 { 00:18:11.856 "cntlid": 115, 00:18:11.856 "qid": 0, 00:18:11.856 "state": "enabled", 00:18:11.856 "thread": "nvmf_tgt_poll_group_000", 00:18:11.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:11.856 "listen_address": { 00:18:11.856 "trtype": "TCP", 00:18:11.856 "adrfam": "IPv4", 00:18:11.856 "traddr": "10.0.0.2", 00:18:11.856 "trsvcid": "4420" 00:18:11.856 }, 00:18:11.856 "peer_address": { 00:18:11.856 "trtype": "TCP", 00:18:11.856 "adrfam": "IPv4", 00:18:11.856 "traddr": "10.0.0.1", 00:18:11.856 "trsvcid": "32794" 00:18:11.856 }, 00:18:11.856 "auth": { 00:18:11.856 "state": "completed", 00:18:11.856 "digest": "sha512", 00:18:11.856 "dhgroup": "ffdhe3072" 00:18:11.856 } 00:18:11.856 } 00:18:11.856 ]' 00:18:11.856 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.115 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.115 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.115 19:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.115 19:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.115 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.115 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.115 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.373 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:12.373 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.313 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.572 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.830 00:18:13.830 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.830 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.830 19:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.089 19:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.089 { 00:18:14.089 "cntlid": 117, 00:18:14.089 "qid": 0, 00:18:14.089 "state": "enabled", 00:18:14.089 "thread": "nvmf_tgt_poll_group_000", 00:18:14.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:14.089 "listen_address": { 00:18:14.089 "trtype": "TCP", 00:18:14.089 "adrfam": "IPv4", 00:18:14.089 "traddr": "10.0.0.2", 00:18:14.089 "trsvcid": "4420" 00:18:14.089 }, 00:18:14.089 "peer_address": { 00:18:14.089 "trtype": "TCP", 00:18:14.089 "adrfam": "IPv4", 00:18:14.089 "traddr": "10.0.0.1", 00:18:14.089 "trsvcid": "32820" 00:18:14.089 }, 00:18:14.089 "auth": { 00:18:14.089 "state": "completed", 00:18:14.089 "digest": "sha512", 00:18:14.089 "dhgroup": "ffdhe3072" 00:18:14.089 } 00:18:14.089 } 00:18:14.089 ]' 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.089 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.347 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.347 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.347 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.347 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.347 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.606 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:14.606 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.549 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.807 19:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.065 00:18:16.065 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.065 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.065 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.323 { 00:18:16.323 "cntlid": 119, 00:18:16.323 "qid": 0, 00:18:16.323 "state": "enabled", 00:18:16.323 "thread": "nvmf_tgt_poll_group_000", 00:18:16.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:16.323 "listen_address": { 00:18:16.323 "trtype": "TCP", 00:18:16.323 "adrfam": "IPv4", 00:18:16.323 "traddr": "10.0.0.2", 00:18:16.323 "trsvcid": "4420" 00:18:16.323 }, 00:18:16.323 "peer_address": { 00:18:16.323 "trtype": "TCP", 00:18:16.323 "adrfam": "IPv4", 00:18:16.323 "traddr": "10.0.0.1", 
00:18:16.323 "trsvcid": "56732" 00:18:16.323 }, 00:18:16.323 "auth": { 00:18:16.323 "state": "completed", 00:18:16.323 "digest": "sha512", 00:18:16.323 "dhgroup": "ffdhe3072" 00:18:16.323 } 00:18:16.323 } 00:18:16.323 ]' 00:18:16.323 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.581 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.840 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:16.840 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.783 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.042 19:17:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.042 19:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.300 00:18:18.300 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.300 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.300 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.879 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.879 { 00:18:18.879 "cntlid": 121, 00:18:18.879 "qid": 0, 00:18:18.879 "state": "enabled", 00:18:18.879 "thread": "nvmf_tgt_poll_group_000", 00:18:18.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:18.880 "listen_address": { 00:18:18.880 "trtype": "TCP", 00:18:18.880 "adrfam": "IPv4", 00:18:18.880 "traddr": "10.0.0.2", 00:18:18.880 "trsvcid": "4420" 00:18:18.880 }, 00:18:18.880 "peer_address": { 00:18:18.880 "trtype": "TCP", 00:18:18.880 "adrfam": "IPv4", 00:18:18.880 "traddr": "10.0.0.1", 00:18:18.880 "trsvcid": "56760" 00:18:18.880 }, 00:18:18.880 "auth": { 00:18:18.880 "state": "completed", 00:18:18.880 "digest": "sha512", 00:18:18.880 "dhgroup": "ffdhe4096" 00:18:18.880 } 00:18:18.880 } 00:18:18.880 ]' 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.880 19:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.880 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.143 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:19.143 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:20.080 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.081 19:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.081 19:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.339 19:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.339 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.596 00:18:20.596 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.596 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.596 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.854 { 00:18:20.854 "cntlid": 123, 00:18:20.854 "qid": 0, 00:18:20.854 "state": "enabled", 00:18:20.854 "thread": "nvmf_tgt_poll_group_000", 00:18:20.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:20.854 "listen_address": { 00:18:20.854 "trtype": "TCP", 00:18:20.854 "adrfam": "IPv4", 00:18:20.854 "traddr": "10.0.0.2", 00:18:20.854 "trsvcid": "4420" 00:18:20.854 }, 00:18:20.854 "peer_address": { 00:18:20.854 "trtype": "TCP", 00:18:20.854 "adrfam": "IPv4", 00:18:20.854 "traddr": "10.0.0.1", 00:18:20.854 "trsvcid": "56778" 00:18:20.854 }, 00:18:20.854 "auth": { 00:18:20.854 "state": "completed", 00:18:20.854 "digest": "sha512", 00:18:20.854 "dhgroup": "ffdhe4096" 00:18:20.854 } 00:18:20.854 } 00:18:20.854 ]' 00:18:20.854 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.112 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.112 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.112 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.112 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.112 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.112 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.112 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.371 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:21.371 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.310 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.310 19:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.568 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.136 00:18:23.136 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.136 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.136 19:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.136 { 00:18:23.136 "cntlid": 125, 00:18:23.136 "qid": 0, 00:18:23.136 "state": "enabled", 00:18:23.136 "thread": "nvmf_tgt_poll_group_000", 00:18:23.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:23.136 "listen_address": { 00:18:23.136 "trtype": "TCP", 00:18:23.136 "adrfam": "IPv4", 00:18:23.136 "traddr": "10.0.0.2", 00:18:23.136 
"trsvcid": "4420" 00:18:23.136 }, 00:18:23.136 "peer_address": { 00:18:23.136 "trtype": "TCP", 00:18:23.136 "adrfam": "IPv4", 00:18:23.136 "traddr": "10.0.0.1", 00:18:23.136 "trsvcid": "56810" 00:18:23.136 }, 00:18:23.136 "auth": { 00:18:23.136 "state": "completed", 00:18:23.136 "digest": "sha512", 00:18:23.136 "dhgroup": "ffdhe4096" 00:18:23.136 } 00:18:23.136 } 00:18:23.136 ]' 00:18:23.136 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.395 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.653 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:23.653 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.593 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.851 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.421 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.421 { 00:18:25.421 "cntlid": 127, 00:18:25.421 "qid": 0, 00:18:25.421 "state": "enabled", 00:18:25.421 "thread": "nvmf_tgt_poll_group_000", 00:18:25.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:25.421 "listen_address": { 00:18:25.421 "trtype": "TCP", 00:18:25.421 "adrfam": "IPv4", 00:18:25.421 "traddr": "10.0.0.2", 00:18:25.421 "trsvcid": "4420" 00:18:25.421 }, 00:18:25.421 "peer_address": { 00:18:25.421 "trtype": "TCP", 00:18:25.421 "adrfam": "IPv4", 00:18:25.421 "traddr": "10.0.0.1", 00:18:25.421 "trsvcid": "59136" 00:18:25.421 }, 00:18:25.421 "auth": { 00:18:25.421 "state": "completed", 00:18:25.421 "digest": "sha512", 00:18:25.421 "dhgroup": "ffdhe4096" 00:18:25.421 } 00:18:25.421 } 00:18:25.421 ]' 00:18:25.421 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.679 19:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.679 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.937 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:25.937 19:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:26.873 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.874 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.132 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.699 00:18:27.699 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.699 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.699 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.957 19:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.957 { 00:18:27.957 "cntlid": 129, 00:18:27.957 "qid": 0, 00:18:27.957 "state": "enabled", 00:18:27.957 "thread": "nvmf_tgt_poll_group_000", 00:18:27.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:27.957 "listen_address": { 00:18:27.957 "trtype": "TCP", 00:18:27.957 "adrfam": "IPv4", 00:18:27.957 "traddr": "10.0.0.2", 00:18:27.957 "trsvcid": "4420" 00:18:27.957 }, 00:18:27.957 "peer_address": { 00:18:27.957 "trtype": "TCP", 00:18:27.957 "adrfam": "IPv4", 00:18:27.957 "traddr": "10.0.0.1", 00:18:27.957 "trsvcid": "59172" 00:18:27.957 }, 00:18:27.957 "auth": { 00:18:27.957 "state": "completed", 00:18:27.957 "digest": "sha512", 00:18:27.957 "dhgroup": "ffdhe6144" 00:18:27.957 } 00:18:27.957 } 00:18:27.957 ]' 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.957 19:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.215 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:28.215 19:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:29.153 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.153 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:29.153 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.153 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.412 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.412 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.412 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:29.412 19:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.670 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.671 19:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.240 00:18:30.240 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.240 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.240 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.498 { 00:18:30.498 "cntlid": 131, 00:18:30.498 "qid": 0, 00:18:30.498 "state": "enabled", 00:18:30.498 "thread": "nvmf_tgt_poll_group_000", 00:18:30.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:30.498 "listen_address": { 00:18:30.498 "trtype": "TCP", 00:18:30.498 "adrfam": "IPv4", 00:18:30.498 "traddr": "10.0.0.2", 00:18:30.498 
"trsvcid": "4420" 00:18:30.498 }, 00:18:30.498 "peer_address": { 00:18:30.498 "trtype": "TCP", 00:18:30.498 "adrfam": "IPv4", 00:18:30.498 "traddr": "10.0.0.1", 00:18:30.498 "trsvcid": "59200" 00:18:30.498 }, 00:18:30.498 "auth": { 00:18:30.498 "state": "completed", 00:18:30.498 "digest": "sha512", 00:18:30.498 "dhgroup": "ffdhe6144" 00:18:30.498 } 00:18:30.498 } 00:18:30.498 ]' 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.498 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.755 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:30.755 19:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.687 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.945 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.510 00:18:32.511 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.511 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:32.511 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.769 { 00:18:32.769 "cntlid": 133, 00:18:32.769 "qid": 0, 00:18:32.769 "state": "enabled", 00:18:32.769 "thread": "nvmf_tgt_poll_group_000", 00:18:32.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:32.769 "listen_address": { 00:18:32.769 "trtype": "TCP", 00:18:32.769 "adrfam": "IPv4", 00:18:32.769 "traddr": "10.0.0.2", 00:18:32.769 "trsvcid": "4420" 00:18:32.769 }, 00:18:32.769 "peer_address": { 00:18:32.769 "trtype": "TCP", 00:18:32.769 "adrfam": "IPv4", 00:18:32.769 "traddr": "10.0.0.1", 00:18:32.769 "trsvcid": "59234" 00:18:32.769 }, 00:18:32.769 "auth": { 00:18:32.769 "state": "completed", 00:18:32.769 "digest": "sha512", 00:18:32.769 "dhgroup": "ffdhe6144" 00:18:32.769 } 00:18:32.769 } 00:18:32.769 ]' 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.769 19:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.769 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.029 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:33.029 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.967 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.968 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.968 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.226 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.793 00:18:34.793 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.793 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.793 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.052 { 00:18:35.052 "cntlid": 135, 00:18:35.052 "qid": 0, 00:18:35.052 "state": "enabled", 00:18:35.052 "thread": "nvmf_tgt_poll_group_000", 00:18:35.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:35.052 "listen_address": { 00:18:35.052 "trtype": "TCP", 00:18:35.052 "adrfam": "IPv4", 00:18:35.052 "traddr": "10.0.0.2", 00:18:35.052 "trsvcid": "4420" 00:18:35.052 }, 00:18:35.052 "peer_address": { 00:18:35.052 "trtype": "TCP", 00:18:35.052 "adrfam": "IPv4", 00:18:35.052 "traddr": "10.0.0.1", 00:18:35.052 "trsvcid": "59252" 00:18:35.052 }, 00:18:35.052 "auth": { 00:18:35.052 "state": "completed", 00:18:35.052 "digest": "sha512", 00:18:35.052 "dhgroup": "ffdhe6144" 00:18:35.052 } 00:18:35.052 } 00:18:35.052 ]' 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.052 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.311 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.311 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.311 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.311 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.311 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.570 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:35.570 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.506 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.506 19:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.765 19:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.716 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.716 { 00:18:37.716 "cntlid": 137, 00:18:37.716 "qid": 0, 00:18:37.716 "state": "enabled", 00:18:37.716 "thread": "nvmf_tgt_poll_group_000", 00:18:37.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:37.716 "listen_address": { 00:18:37.716 "trtype": "TCP", 00:18:37.716 "adrfam": "IPv4", 00:18:37.716 "traddr": "10.0.0.2", 00:18:37.716 
"trsvcid": "4420" 00:18:37.716 }, 00:18:37.716 "peer_address": { 00:18:37.716 "trtype": "TCP", 00:18:37.716 "adrfam": "IPv4", 00:18:37.716 "traddr": "10.0.0.1", 00:18:37.716 "trsvcid": "36948" 00:18:37.716 }, 00:18:37.716 "auth": { 00:18:37.716 "state": "completed", 00:18:37.716 "digest": "sha512", 00:18:37.716 "dhgroup": "ffdhe8192" 00:18:37.716 } 00:18:37.716 } 00:18:37.716 ]' 00:18:37.716 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.975 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.233 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:38.233 19:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.171 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.429 19:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.429 19:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.362 00:18:40.362 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.362 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.362 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.619 { 00:18:40.619 "cntlid": 139, 00:18:40.619 "qid": 0, 00:18:40.619 "state": "enabled", 00:18:40.619 "thread": "nvmf_tgt_poll_group_000", 00:18:40.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:40.619 "listen_address": { 00:18:40.619 "trtype": "TCP", 00:18:40.619 "adrfam": "IPv4", 00:18:40.619 "traddr": "10.0.0.2", 00:18:40.619 "trsvcid": "4420" 00:18:40.619 }, 00:18:40.619 "peer_address": { 00:18:40.619 "trtype": "TCP", 00:18:40.619 "adrfam": "IPv4", 00:18:40.619 "traddr": "10.0.0.1", 00:18:40.619 "trsvcid": "36964" 00:18:40.619 }, 00:18:40.619 "auth": { 00:18:40.619 "state": "completed", 00:18:40.619 "digest": "sha512", 00:18:40.619 "dhgroup": "ffdhe8192" 00:18:40.619 } 00:18:40.619 } 00:18:40.619 ]' 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.619 19:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.619 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.877 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:40.877 19:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: --dhchap-ctrl-secret DHHC-1:02:ZDZiZTlkMDI2YTJiMDc4ZTEzMDk4NjVlZjI4MWYwYzk3MmRiYmViZTA3NzhhYzEz+4Vy8g==: 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:41.812 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.070 19:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.011 00:18:43.011 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.011 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.011 19:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.269 19:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.269 { 00:18:43.269 "cntlid": 141, 00:18:43.269 "qid": 0, 00:18:43.269 "state": "enabled", 00:18:43.269 "thread": "nvmf_tgt_poll_group_000", 00:18:43.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:43.269 "listen_address": { 00:18:43.269 "trtype": "TCP", 00:18:43.269 "adrfam": "IPv4", 00:18:43.269 "traddr": "10.0.0.2", 00:18:43.269 "trsvcid": "4420" 00:18:43.269 }, 00:18:43.269 "peer_address": { 00:18:43.269 "trtype": "TCP", 00:18:43.269 "adrfam": "IPv4", 00:18:43.269 "traddr": "10.0.0.1", 00:18:43.269 "trsvcid": "36992" 00:18:43.269 }, 00:18:43.269 "auth": { 00:18:43.269 "state": "completed", 00:18:43.269 "digest": "sha512", 00:18:43.269 "dhgroup": "ffdhe8192" 00:18:43.269 } 00:18:43.269 } 00:18:43.269 ]' 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.269 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.526 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:43.526 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:01:Y2IxMGJlYjc2N2IxMjk3M2U0NWQzODM0NmNlNTk5Njj2IZYk: 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.465 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.724 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.663 00:18:45.663 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.663 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.663 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.921 { 00:18:45.921 "cntlid": 143, 00:18:45.921 "qid": 0, 00:18:45.921 "state": "enabled", 00:18:45.921 "thread": "nvmf_tgt_poll_group_000", 00:18:45.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:45.921 "listen_address": { 00:18:45.921 "trtype": "TCP", 00:18:45.921 "adrfam": 
"IPv4", 00:18:45.921 "traddr": "10.0.0.2", 00:18:45.921 "trsvcid": "4420" 00:18:45.921 }, 00:18:45.921 "peer_address": { 00:18:45.921 "trtype": "TCP", 00:18:45.921 "adrfam": "IPv4", 00:18:45.921 "traddr": "10.0.0.1", 00:18:45.921 "trsvcid": "44490" 00:18:45.921 }, 00:18:45.921 "auth": { 00:18:45.921 "state": "completed", 00:18:45.921 "digest": "sha512", 00:18:45.921 "dhgroup": "ffdhe8192" 00:18:45.921 } 00:18:45.921 } 00:18:45.921 ]' 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.921 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.187 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.187 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.187 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.187 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.187 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.452 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:46.452 19:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.391 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:47.650 19:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.650 19:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.591 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.591 { 00:18:48.591 "cntlid": 145, 00:18:48.591 "qid": 0, 00:18:48.591 "state": "enabled", 00:18:48.591 "thread": "nvmf_tgt_poll_group_000", 00:18:48.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:48.591 "listen_address": { 00:18:48.591 "trtype": "TCP", 00:18:48.591 "adrfam": "IPv4", 00:18:48.591 "traddr": "10.0.0.2", 00:18:48.591 "trsvcid": "4420" 00:18:48.591 }, 00:18:48.591 "peer_address": { 00:18:48.591 "trtype": "TCP", 00:18:48.591 "adrfam": "IPv4", 00:18:48.591 "traddr": "10.0.0.1", 00:18:48.591 "trsvcid": "44524" 00:18:48.591 }, 00:18:48.591 "auth": { 00:18:48.591 "state": 
"completed", 00:18:48.591 "digest": "sha512", 00:18:48.591 "dhgroup": "ffdhe8192" 00:18:48.591 } 00:18:48.591 } 00:18:48.591 ]' 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.591 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.849 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.849 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.849 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.849 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.849 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.107 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:49.107 19:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:MGFmYmQ4ZDU4MTEwZjZlMjE5MjNhNzAxMTlhOWE5MzhiZmRhMjYzMjUzNjM0ZjBhcokgLQ==: --dhchap-ctrl-secret 
DHHC-1:03:OTgwNGRjNGNhMTcwMTI1NmI0MmZkYzRkMDM3ODIyZjFlNDFhNmM0NjQ5YTJiZTExODk2MDY2NjBjZTA4N2Q0M27rvNo=: 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:50.044 19:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:50.985 request: 00:18:50.985 { 00:18:50.985 "name": "nvme0", 00:18:50.985 "trtype": "tcp", 00:18:50.985 "traddr": "10.0.0.2", 00:18:50.985 "adrfam": "ipv4", 00:18:50.985 "trsvcid": "4420", 00:18:50.985 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:50.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:50.986 "prchk_reftag": false, 00:18:50.986 "prchk_guard": false, 00:18:50.986 "hdgst": false, 00:18:50.986 "ddgst": false, 00:18:50.986 "dhchap_key": "key2", 00:18:50.986 "allow_unrecognized_csi": false, 00:18:50.986 "method": "bdev_nvme_attach_controller", 00:18:50.986 "req_id": 1 00:18:50.986 } 00:18:50.986 Got JSON-RPC error response 00:18:50.986 response: 00:18:50.986 { 00:18:50.986 "code": -5, 00:18:50.986 "message": 
"Input/output error" 00:18:50.986 } 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:50.986 19:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.986 19:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.555 request: 00:18:51.555 { 00:18:51.555 "name": "nvme0", 00:18:51.555 "trtype": "tcp", 00:18:51.555 "traddr": "10.0.0.2", 00:18:51.555 "adrfam": "ipv4", 00:18:51.555 "trsvcid": "4420", 00:18:51.555 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:51.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:51.555 "prchk_reftag": false, 00:18:51.555 "prchk_guard": false, 00:18:51.555 "hdgst": 
false, 00:18:51.555 "ddgst": false, 00:18:51.555 "dhchap_key": "key1", 00:18:51.555 "dhchap_ctrlr_key": "ckey2", 00:18:51.555 "allow_unrecognized_csi": false, 00:18:51.555 "method": "bdev_nvme_attach_controller", 00:18:51.555 "req_id": 1 00:18:51.555 } 00:18:51.555 Got JSON-RPC error response 00:18:51.555 response: 00:18:51.555 { 00:18:51.555 "code": -5, 00:18:51.555 "message": "Input/output error" 00:18:51.555 } 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.555 19:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.496 request: 00:18:52.496 { 00:18:52.496 "name": "nvme0", 00:18:52.496 "trtype": 
"tcp", 00:18:52.496 "traddr": "10.0.0.2", 00:18:52.496 "adrfam": "ipv4", 00:18:52.496 "trsvcid": "4420", 00:18:52.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:52.496 "prchk_reftag": false, 00:18:52.496 "prchk_guard": false, 00:18:52.496 "hdgst": false, 00:18:52.496 "ddgst": false, 00:18:52.496 "dhchap_key": "key1", 00:18:52.496 "dhchap_ctrlr_key": "ckey1", 00:18:52.496 "allow_unrecognized_csi": false, 00:18:52.496 "method": "bdev_nvme_attach_controller", 00:18:52.496 "req_id": 1 00:18:52.496 } 00:18:52.496 Got JSON-RPC error response 00:18:52.496 response: 00:18:52.496 { 00:18:52.496 "code": -5, 00:18:52.496 "message": "Input/output error" 00:18:52.496 } 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 201158 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 201158 ']' 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 201158 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201158 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201158' 00:18:52.496 killing process with pid 201158 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 201158 00:18:52.496 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 201158 00:18:52.756 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:52.756 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.756 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.756 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.756 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:52.757 19:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=224340 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 224340 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 224340 ']' 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.757 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.015 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # 
waitforlisten 224340 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 224340 ']' 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.016 19:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.277 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.277 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:53.277 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:53.277 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.277 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.277 null0 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4aD 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dnP ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dnP 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Avv 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1fI ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1fI 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kLM 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.xh9 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xh9 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kwy 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.536 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.537 19:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.537 19:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.918 nvme0n1 00:18:54.918 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.918 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.918 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.177 { 00:18:55.177 "cntlid": 1, 00:18:55.177 "qid": 0, 00:18:55.177 "state": "enabled", 00:18:55.177 "thread": "nvmf_tgt_poll_group_000", 00:18:55.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:55.177 "listen_address": { 00:18:55.177 "trtype": "TCP", 00:18:55.177 "adrfam": "IPv4", 00:18:55.177 "traddr": "10.0.0.2", 00:18:55.177 "trsvcid": "4420" 00:18:55.177 }, 00:18:55.177 "peer_address": { 00:18:55.177 "trtype": "TCP", 00:18:55.177 "adrfam": "IPv4", 00:18:55.177 "traddr": "10.0.0.1", 00:18:55.177 "trsvcid": "44560" 00:18:55.177 }, 00:18:55.177 "auth": { 
00:18:55.177 "state": "completed", 00:18:55.177 "digest": "sha512", 00:18:55.177 "dhgroup": "ffdhe8192" 00:18:55.177 } 00:18:55.177 } 00:18:55.177 ]' 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.177 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.436 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:55.436 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:56.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:56.371 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.630 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.198 request: 00:18:57.198 { 00:18:57.198 "name": "nvme0", 00:18:57.198 "trtype": "tcp", 00:18:57.198 "traddr": "10.0.0.2", 00:18:57.198 "adrfam": "ipv4", 00:18:57.198 "trsvcid": "4420", 00:18:57.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:57.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:57.198 "prchk_reftag": false, 00:18:57.198 "prchk_guard": false, 00:18:57.198 "hdgst": false, 00:18:57.198 "ddgst": false, 00:18:57.198 "dhchap_key": "key3", 00:18:57.198 "allow_unrecognized_csi": false, 00:18:57.198 "method": "bdev_nvme_attach_controller", 00:18:57.198 "req_id": 1 00:18:57.198 } 
00:18:57.198 Got JSON-RPC error response 00:18:57.198 response: 00:18:57.198 { 00:18:57.198 "code": -5, 00:18:57.198 "message": "Input/output error" 00:18:57.198 } 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:57.198 19:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.456 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.456 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.715 request: 00:18:57.715 { 00:18:57.715 "name": "nvme0", 00:18:57.715 "trtype": "tcp", 00:18:57.715 "traddr": "10.0.0.2", 00:18:57.715 "adrfam": "ipv4", 00:18:57.715 "trsvcid": "4420", 00:18:57.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:57.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:57.715 "prchk_reftag": false, 00:18:57.715 "prchk_guard": false, 00:18:57.715 "hdgst": false, 00:18:57.715 "ddgst": false, 00:18:57.715 "dhchap_key": "key3", 00:18:57.715 "allow_unrecognized_csi": false, 00:18:57.715 "method": "bdev_nvme_attach_controller", 00:18:57.715 "req_id": 1 00:18:57.715 } 00:18:57.715 Got JSON-RPC error response 00:18:57.715 response: 00:18:57.715 { 00:18:57.715 "code": -5, 00:18:57.715 "message": "Input/output error" 00:18:57.715 } 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:57.715 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.715 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.973 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.973 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:58.542 request: 00:18:58.542 { 00:18:58.542 "name": "nvme0", 00:18:58.542 "trtype": "tcp", 00:18:58.542 "traddr": "10.0.0.2", 00:18:58.542 "adrfam": "ipv4", 00:18:58.542 "trsvcid": "4420", 00:18:58.542 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:58.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:58.542 "prchk_reftag": false, 00:18:58.542 "prchk_guard": false, 00:18:58.542 "hdgst": false, 00:18:58.542 "ddgst": false, 00:18:58.542 "dhchap_key": "key0", 00:18:58.542 "dhchap_ctrlr_key": "key1", 00:18:58.542 "allow_unrecognized_csi": false, 00:18:58.542 "method": "bdev_nvme_attach_controller", 00:18:58.542 "req_id": 1 00:18:58.542 } 00:18:58.542 Got JSON-RPC error response 00:18:58.542 response: 00:18:58.542 { 00:18:58.542 "code": -5, 00:18:58.542 "message": "Input/output error" 00:18:58.542 } 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:58.542 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:58.801 nvme0n1 00:18:58.801 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:58.801 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:58.801 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.059 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.059 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.059 19:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:59.318 19:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:00.720 nvme0n1 00:19:00.720 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:00.720 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.720 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:00.978 19:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.236 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.236 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:19:01.236 19:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: --dhchap-ctrl-secret DHHC-1:03:YmEyMDJhNWIxMmY5YjNhNjM5ODA3MTkyNzkxNDk0MjZkODlmZDdjYWM0ZGU3YWE4OWQyMzhhNDgyOGVmYTgyZuRK+hI=: 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:02.170 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.170 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:19:02.429 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:03.376 request: 00:19:03.376 { 00:19:03.376 "name": "nvme0", 00:19:03.376 "trtype": "tcp", 00:19:03.376 "traddr": "10.0.0.2", 00:19:03.376 "adrfam": "ipv4", 00:19:03.376 "trsvcid": "4420", 00:19:03.376 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:03.376 "prchk_reftag": false, 00:19:03.376 "prchk_guard": false, 00:19:03.376 "hdgst": false, 00:19:03.376 "ddgst": false, 00:19:03.376 "dhchap_key": "key1", 00:19:03.377 "allow_unrecognized_csi": false, 00:19:03.377 "method": "bdev_nvme_attach_controller", 00:19:03.377 "req_id": 1 00:19:03.377 } 00:19:03.377 Got JSON-RPC error response 00:19:03.377 response: 00:19:03.377 { 00:19:03.377 "code": -5, 00:19:03.377 "message": "Input/output error" 00:19:03.377 } 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:03.377 19:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:04.756 nvme0n1 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.756 19:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:05.014 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:05.580 nvme0n1 00:19:05.581 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:05.581 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:05.581 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.839 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.839 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.839 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:06.098 
19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: '' 2s 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: ]] 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWU5N2FkZTcwODE4YjA4NGFhMDAwNDI3NjE2OGNmZDlxffnd: 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:06.098 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:08.022 19:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: 2s 00:19:08.022 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: ]] 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDI4YmRlMDUxYzkyNGQxZTNlYmViNTE5NWM3MDAwNzUyZmUzZmMyZGNhYTZjYTk3rJuKkQ==: 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:08.023 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:10.553 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:10.553 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:10.553 19:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.553 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.553 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:10.554 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:10.554 19:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:11.486 nvme0n1 00:19:11.486 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:11.486 19:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.486 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.486 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.486 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:11.486 19:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:12.420 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:12.420 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:12.420 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:19:12.678 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:12.936 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:12.936 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:12.936 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:13.195 19:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:13.195 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:14.129 request: 00:19:14.129 { 00:19:14.129 "name": "nvme0", 00:19:14.129 "dhchap_key": "key1", 00:19:14.129 "dhchap_ctrlr_key": "key3", 00:19:14.129 "method": "bdev_nvme_set_keys", 00:19:14.129 "req_id": 1 00:19:14.129 } 00:19:14.129 Got JSON-RPC error response 00:19:14.129 response: 00:19:14.129 { 00:19:14.129 "code": -13, 00:19:14.129 "message": "Permission denied" 00:19:14.129 } 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:14.129 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.387 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:14.387 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:15.322 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:15.322 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:15.322 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.580 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:16.956 nvme0n1 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:16.956 
19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:16.956 19:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:17.888 request: 00:19:17.888 { 00:19:17.888 "name": "nvme0", 00:19:17.888 "dhchap_key": "key2", 00:19:17.888 "dhchap_ctrlr_key": "key0", 00:19:17.888 "method": "bdev_nvme_set_keys", 00:19:17.888 "req_id": 1 00:19:17.888 } 00:19:17.888 Got JSON-RPC error response 00:19:17.888 response: 00:19:17.888 { 00:19:17.888 "code": -13, 00:19:17.888 "message": "Permission denied" 00:19:17.888 } 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:17.888 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.147 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:18.147 19:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:19.108 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:19.108 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:19.108 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 201189 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 201189 ']' 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 201189 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 201189 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 201189' 00:19:19.365 killing process with 
pid 201189 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 201189 00:19:19.365 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 201189 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.929 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.929 rmmod nvme_tcp 00:19:19.930 rmmod nvme_fabrics 00:19:19.930 rmmod nvme_keyring 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 224340 ']' 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 224340 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 224340 ']' 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 224340 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:19.930 
19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224340 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224340' 00:19:19.930 killing process with pid 224340 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 224340 00:19:19.930 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 224340 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:20.190 19:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.190 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4aD /tmp/spdk.key-sha256.Avv /tmp/spdk.key-sha384.kLM /tmp/spdk.key-sha512.kwy /tmp/spdk.key-sha512.dnP /tmp/spdk.key-sha384.1fI /tmp/spdk.key-sha256.xh9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:22.097 00:19:22.097 real 3m33.118s 00:19:22.097 user 8m18.966s 00:19:22.097 sys 0m27.999s 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.097 ************************************ 00:19:22.097 END TEST nvmf_auth_target 00:19:22.097 ************************************ 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:19:22.097 ************************************ 00:19:22.097 START TEST nvmf_bdevio_no_huge 00:19:22.097 ************************************ 00:19:22.097 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:22.356 * Looking for test storage... 00:19:22.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.356 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.357 19:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.357 --rc genhtml_branch_coverage=1 00:19:22.357 --rc genhtml_function_coverage=1 00:19:22.357 --rc genhtml_legend=1 00:19:22.357 --rc geninfo_all_blocks=1 00:19:22.357 --rc geninfo_unexecuted_blocks=1 00:19:22.357 00:19:22.357 ' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.357 --rc genhtml_branch_coverage=1 00:19:22.357 --rc genhtml_function_coverage=1 00:19:22.357 --rc genhtml_legend=1 00:19:22.357 --rc geninfo_all_blocks=1 00:19:22.357 --rc geninfo_unexecuted_blocks=1 00:19:22.357 00:19:22.357 ' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.357 --rc genhtml_branch_coverage=1 00:19:22.357 --rc genhtml_function_coverage=1 00:19:22.357 --rc genhtml_legend=1 00:19:22.357 --rc geninfo_all_blocks=1 00:19:22.357 --rc geninfo_unexecuted_blocks=1 00:19:22.357 00:19:22.357 ' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.357 --rc genhtml_branch_coverage=1 00:19:22.357 --rc genhtml_function_coverage=1 00:19:22.357 --rc 
genhtml_legend=1 00:19:22.357 --rc geninfo_all_blocks=1 00:19:22.357 --rc geninfo_unexecuted_blocks=1 00:19:22.357 00:19:22.357 ' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.357 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.358 19:18:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 
0x159b)' 00:19:24.892 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:24.892 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:24.892 Found net devices under 0000:84:00.0: cvl_0_0 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.892 
19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:24.892 Found net devices under 0000:84:00.1: cvl_0_1 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.892 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:24.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:19:24.893 00:19:24.893 --- 10.0.0.2 ping statistics --- 00:19:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.893 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:24.893 00:19:24.893 --- 10.0.0.1 ping statistics --- 00:19:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.893 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=230221 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 230221 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 230221 ']' 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 [2024-12-06 19:18:09.565485] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:19:24.893 [2024-12-06 19:18:09.565561] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:24.893 [2024-12-06 19:18:09.644565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.893 [2024-12-06 19:18:09.702085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.893 [2024-12-06 19:18:09.702150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.893 [2024-12-06 19:18:09.702174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.893 [2024-12-06 19:18:09.702184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.893 [2024-12-06 19:18:09.702194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.893 [2024-12-06 19:18:09.703284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:24.893 [2024-12-06 19:18:09.703344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:24.893 [2024-12-06 19:18:09.703421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:24.893 [2024-12-06 19:18:09.703425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 [2024-12-06 19:18:09.860868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.893 19:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 Malloc0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:24.893 [2024-12-06 19:18:09.899345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.893 19:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:24.893 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:24.893 { 00:19:24.893 "params": { 00:19:24.894 "name": "Nvme$subsystem", 00:19:24.894 "trtype": "$TEST_TRANSPORT", 00:19:24.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.894 "adrfam": "ipv4", 00:19:24.894 "trsvcid": "$NVMF_PORT", 00:19:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.894 "hdgst": ${hdgst:-false}, 00:19:24.894 "ddgst": ${ddgst:-false} 00:19:24.894 }, 00:19:24.894 "method": "bdev_nvme_attach_controller" 00:19:24.894 } 00:19:24.894 EOF 00:19:24.894 )") 00:19:24.894 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:24.894 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:24.894 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:24.894 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:24.894 "params": { 00:19:24.894 "name": "Nvme1", 00:19:24.894 "trtype": "tcp", 00:19:24.894 "traddr": "10.0.0.2", 00:19:24.894 "adrfam": "ipv4", 00:19:24.894 "trsvcid": "4420", 00:19:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.894 "hdgst": false, 00:19:24.894 "ddgst": false 00:19:24.894 }, 00:19:24.894 "method": "bdev_nvme_attach_controller" 00:19:24.894 }' 00:19:25.154 [2024-12-06 19:18:09.949647] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:19:25.154 [2024-12-06 19:18:09.949750] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid230254 ] 00:19:25.154 [2024-12-06 19:18:10.022651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.154 [2024-12-06 19:18:10.088534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.154 [2024-12-06 19:18:10.088584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.154 [2024-12-06 19:18:10.088588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.413 I/O targets: 00:19:25.413 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:25.413 00:19:25.413 00:19:25.413 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.413 http://cunit.sourceforge.net/ 00:19:25.413 00:19:25.413 00:19:25.413 Suite: bdevio tests on: Nvme1n1 00:19:25.670 Test: blockdev write read block ...passed 00:19:25.670 Test: blockdev write zeroes read block ...passed 00:19:25.670 Test: blockdev write zeroes read no split ...passed 00:19:25.670 Test: blockdev write zeroes 
read split ...passed 00:19:25.670 Test: blockdev write zeroes read split partial ...passed 00:19:25.670 Test: blockdev reset ...[2024-12-06 19:18:10.648207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:25.670 [2024-12-06 19:18:10.648321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d719f0 (9): Bad file descriptor 00:19:25.670 [2024-12-06 19:18:10.663719] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:25.670 passed 00:19:25.670 Test: blockdev write read 8 blocks ...passed 00:19:25.670 Test: blockdev write read size > 128k ...passed 00:19:25.670 Test: blockdev write read invalid size ...passed 00:19:25.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:25.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:25.670 Test: blockdev write read max offset ...passed 00:19:25.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:25.928 Test: blockdev writev readv 8 blocks ...passed 00:19:25.928 Test: blockdev writev readv 30 x 1block ...passed 00:19:25.928 Test: blockdev writev readv block ...passed 00:19:25.928 Test: blockdev writev readv size > 128k ...passed 00:19:25.928 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:25.928 Test: blockdev comparev and writev ...[2024-12-06 19:18:10.875398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.875435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.875460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 
19:18:10.875477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.875873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.875898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.875921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.875938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.876315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.876339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.876360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.876376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.876746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.876771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.876792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:25.928 [2024-12-06 19:18:10.876808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.928 passed 00:19:25.928 Test: blockdev nvme passthru rw ...passed 00:19:25.928 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:18:10.959024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.928 [2024-12-06 19:18:10.959054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.959206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.928 [2024-12-06 19:18:10.959228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.959361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.928 [2024-12-06 19:18:10.959382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.928 [2024-12-06 19:18:10.959519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:25.928 [2024-12-06 19:18:10.959541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.928 passed 00:19:26.186 Test: blockdev nvme admin passthru ...passed 00:19:26.186 Test: blockdev copy ...passed 00:19:26.186 00:19:26.186 Run Summary: Type Total Ran Passed Failed Inactive 00:19:26.186 suites 1 1 n/a 0 0 00:19:26.186 tests 23 23 23 0 0 00:19:26.186 asserts 152 152 152 0 n/a 00:19:26.186 00:19:26.186 Elapsed time = 1.153 seconds 
00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:26.446 rmmod nvme_tcp 00:19:26.446 rmmod nvme_fabrics 00:19:26.446 rmmod nvme_keyring 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 230221 ']' 00:19:26.446 19:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 230221 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 230221 ']' 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 230221 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 230221 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 230221' 00:19:26.446 killing process with pid 230221 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 230221 00:19:26.446 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 230221 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:27.019 19:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.019 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:28.931 00:19:28.931 real 0m6.770s 00:19:28.931 user 0m11.664s 00:19:28.931 sys 0m2.602s 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 ************************************ 00:19:28.931 END TEST nvmf_bdevio_no_huge 00:19:28.931 ************************************ 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 
************************************ 00:19:28.931 START TEST nvmf_tls 00:19:28.931 ************************************ 00:19:28.931 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:29.191 * Looking for test storage... 00:19:29.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.191 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.191 19:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.191 --rc genhtml_branch_coverage=1 00:19:29.191 --rc genhtml_function_coverage=1 00:19:29.191 --rc genhtml_legend=1 00:19:29.191 --rc geninfo_all_blocks=1 00:19:29.191 --rc geninfo_unexecuted_blocks=1 00:19:29.191 00:19:29.191 ' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.191 --rc genhtml_branch_coverage=1 00:19:29.191 --rc genhtml_function_coverage=1 00:19:29.191 --rc genhtml_legend=1 00:19:29.191 --rc geninfo_all_blocks=1 00:19:29.191 --rc geninfo_unexecuted_blocks=1 00:19:29.191 00:19:29.191 ' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.191 --rc genhtml_branch_coverage=1 00:19:29.191 --rc genhtml_function_coverage=1 00:19:29.191 --rc genhtml_legend=1 00:19:29.191 --rc geninfo_all_blocks=1 00:19:29.191 --rc geninfo_unexecuted_blocks=1 00:19:29.191 00:19:29.191 ' 00:19:29.191 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.191 --rc genhtml_branch_coverage=1 00:19:29.191 --rc genhtml_function_coverage=1 00:19:29.191 --rc genhtml_legend=1 00:19:29.191 --rc geninfo_all_blocks=1 00:19:29.191 --rc geninfo_unexecuted_blocks=1 00:19:29.191 00:19:29.192 ' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.192 
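The `cmp_versions` / `lt 1.15 2` trace above splits both version strings on `.`, `-`, and `:` (`IFS=.-:` with `read -ra`) and compares the components numerically, left to right. A simplified sketch of just the less-than case — the real scripts/common.sh helper also handles `>`, `<=`, and `>=`:

```shell
# Minimal sketch of the cmp_versions logic traced above: split each
# version on '.', '-' or ':' and compare element-wise as integers,
# treating missing components as 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not 'less than'
}
```

This is why the trace resolves `lt 1.15 2` to true: the first components already decide it (1 < 2), which then selects the branch-coverage lcov options.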
19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
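The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33: an unset flag expands to an empty string before a numeric test. A general defensive sketch (not SPDK's actual fix) is to default the expansion before comparing:

```shell
# Reproduces, then avoids, the '[: : integer expression expected'
# failure seen in the trace when an unset feature flag hits a numeric test.
flag=""                                  # unset/empty feature flag

# Fragile: '[' "" -eq 1 ']' errors out (status 2), as in the log.
[ "$flag" -eq 1 ] 2> /dev/null || echo "numeric test failed as in the log"

# Robust: default empty/unset to 0 before the comparison.
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi
```

In the log the failed test is harmless (the `[ ... ']'` simply returns nonzero and the script continues), but the stderr line it leaves behind is exactly this pattern.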
nvmf/common.sh@309 -- # xtrace_disable 00:19:29.192 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.733 19:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:31.733 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:31.733 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.733 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.734 19:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:31.734 Found net devices under 0000:84:00.0: cvl_0_0 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:31.734 Found net devices under 0000:84:00.1: cvl_0_1 00:19:31.734 19:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.734 
19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:19:31.734 00:19:31.734 --- 10.0.0.2 ping statistics --- 00:19:31.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.734 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
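The `nvmf_tcp_init` sequence above builds the test topology by moving one physical port (`cvl_0_0`, the target side) into a private network namespace and leaving its peer (`cvl_0_1`, the initiator side) in the root namespace, then punching an iptables hole for port 4420 and ping-testing both directions. A dry-run condensation of those commands — echoed rather than executed so it runs without root; interface names and addresses as they appear in the log:

```shell
# Dry-run rendering of the nvmf_tcp_init commands traced above.
# Drop the leading "echo" on each line to apply the topology for real
# (requires root and the two interfaces).
nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    echo ip netns add "$ns"
    echo ip link set "$tgt" netns "$ns"                       # target iface into ns
    echo ip addr add 10.0.0.1/24 dev "$ini"                   # initiator IP
    echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target IP
    echo ip link set "$ini" up
    echo ip netns exec "$ns" ip link set "$tgt" up
    echo ip netns exec "$ns" ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator side, tagged
    # SPDK_NVMF so the teardown's iptables-save|grep -v can remove it
    echo iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    echo ping -c 1 10.0.0.2                                   # connectivity check
}
```

The comment tag is the counterpart of the teardown seen earlier in this log: `iptables-save | grep -v SPDK_NVMF | iptables-restore` drops every rule the test added in one pass.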
00:19:31.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:31.734 00:19:31.734 --- 10.0.0.1 ping statistics --- 00:19:31.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.734 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=232490 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 232490 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 232490 ']' 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.734 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.734 [2024-12-06 19:18:16.471187] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:19:31.735 [2024-12-06 19:18:16.471268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.735 [2024-12-06 19:18:16.544160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.735 [2024-12-06 19:18:16.596681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.735 [2024-12-06 19:18:16.596761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:31.735 [2024-12-06 19:18:16.596790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.735 [2024-12-06 19:18:16.596808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.735 [2024-12-06 19:18:16.596822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.735 [2024-12-06 19:18:16.597459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:31.735 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:31.993 true 00:19:31.993 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.993 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:32.251 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:32.251 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:32.251 
19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:32.510 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.510 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:32.770 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:32.770 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:32.770 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:33.029 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.029 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:33.600 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
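The TLS configuration above is a set-then-verify loop: `rpc.py sock_impl_set_options -i ssl --tls-version N`, then `rpc.py sock_impl_get_options -i ssl` piped through `jq -r` and compared against the expected value. A sketch of the verify half, with the RPC output stubbed as literal JSON so it runs without a live `nvmf_tgt`:

```shell
# Set-then-verify pattern from the tls.sh trace: apply an option via
# rpc.py sock_impl_set_options, read it back with sock_impl_get_options,
# and extract the field of interest with jq. The stub below stands in
# for the live RPC output.
get_options_stub() {
    # stand-in for: rpc.py sock_impl_get_options -i ssl
    printf '{"tls_version": 13, "enable_ktls": false}\n'
}

version=$(get_options_stub | jq -r .tls_version)
if [[ $version != 13 ]]; then
    echo "expected tls_version 13, got $version" >&2
    exit 1
fi

ktls=$(get_options_stub | jq -r .enable_ktls)
if [[ $ktls != false ]]; then
    echo "expected ktls disabled, got $ktls" >&2
    exit 1
fi
```

The trace's `[[ 13 != \1\3 ]]` comparisons are this same check with the pattern side backslash-escaped (xtrace quoting), not a different operator.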
00:19:33.860 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.860 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:34.119 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:34.119 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:34.119 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:34.690 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.950 19:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ruUML2lgVE 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6h9DV6vWdV 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ruUML2lgVE 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6h9DV6vWdV 00:19:34.950 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:35.211 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:35.470 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ruUML2lgVE 00:19:35.470 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ruUML2lgVE 00:19:35.470 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.727 [2024-12-06 19:18:20.763276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.986 19:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.247 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.505 [2024-12-06 19:18:21.304749] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.505 [2024-12-06 19:18:21.305108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.505 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.765 malloc0 00:19:36.765 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.026 19:18:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ruUML2lgVE 00:19:37.284 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.544 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ruUML2lgVE 00:19:47.548 Initializing NVMe Controllers 00:19:47.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:47.548 Initialization complete. Launching workers. 
00:19:47.548 ======================================================== 00:19:47.548 Latency(us) 00:19:47.548 Device Information : IOPS MiB/s Average min max 00:19:47.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8588.37 33.55 7453.22 1129.05 9186.31 00:19:47.548 ======================================================== 00:19:47.548 Total : 8588.37 33.55 7453.22 1129.05 9186.31 00:19:47.548 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ruUML2lgVE 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ruUML2lgVE 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=234391 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.548 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 234391 /var/tmp/bdevperf.sock 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 234391 ']' 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
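The `format_interchange_psk` step earlier in the transcript (which produced `key=NVMeTLSkey-1:01:...`) pipes the key through an inline `python -`. A self-contained sketch of what that snippet computes is below, under the assumption that the interchange format is `NVMeTLSkey-1:<digest>:<base64>:` with the base64 payload being the configured key bytes followed by their little-endian CRC-32, and that the hex string is used as raw ASCII bytes (which is what the base64 output in the transcript decodes to). The exact SPDK helper in `nvmf/common.sh` may differ in details:

```shell
# Sketch of format_interchange_psk: wrap a configured key in the
# NVMe/TCP TLS PSK interchange format "NVMeTLSkey-1:<digest>:<b64>:".
# Assumption: payload = base64(key_bytes || CRC-32(key_bytes)), with
# the CRC appended little-endian and the key taken as ASCII bytes.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()
# Append the CRC-32 of the key, packed little-endian, then base64 it.
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{payload}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```

The first 40 base64 characters depend only on the key (they encode its first 30 ASCII bytes), which is why they match the `MDAxMTIy...` value in the transcript regardless of the CRC trailer.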
00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.807 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.807 [2024-12-06 19:18:32.639574] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:19:47.807 [2024-12-06 19:18:32.639653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234391 ] 00:19:47.807 [2024-12-06 19:18:32.704972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.807 [2024-12-06 19:18:32.761653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.067 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.067 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.067 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ruUML2lgVE 00:19:48.326 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:48.584 [2024-12-06 19:18:33.525655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.584 TLSTESTn1 00:19:48.584 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.842 Running I/O for 10 seconds... 00:19:50.718 3270.00 IOPS, 12.77 MiB/s [2024-12-06T18:18:37.147Z] 3330.50 IOPS, 13.01 MiB/s [2024-12-06T18:18:38.084Z] 3400.67 IOPS, 13.28 MiB/s [2024-12-06T18:18:39.045Z] 3405.00 IOPS, 13.30 MiB/s [2024-12-06T18:18:39.982Z] 3395.00 IOPS, 13.26 MiB/s [2024-12-06T18:18:40.917Z] 3400.00 IOPS, 13.28 MiB/s [2024-12-06T18:18:41.857Z] 3399.86 IOPS, 13.28 MiB/s [2024-12-06T18:18:42.819Z] 3410.75 IOPS, 13.32 MiB/s [2024-12-06T18:18:43.762Z] 3408.11 IOPS, 13.31 MiB/s [2024-12-06T18:18:44.023Z] 3403.90 IOPS, 13.30 MiB/s 00:19:58.974 Latency(us) 00:19:58.974 [2024-12-06T18:18:44.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.974 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.974 Verification LBA range: start 0x0 length 0x2000 00:19:58.974 TLSTESTn1 : 10.03 3407.14 13.31 0.00 0.00 37492.97 5873.97 38447.79 00:19:58.974 [2024-12-06T18:18:44.023Z] =================================================================================================================== 00:19:58.974 [2024-12-06T18:18:44.023Z] Total : 3407.14 13.31 0.00 0.00 37492.97 5873.97 38447.79 00:19:58.974 { 00:19:58.974 "results": [ 00:19:58.974 { 00:19:58.974 "job": "TLSTESTn1", 00:19:58.974 "core_mask": "0x4", 00:19:58.974 "workload": "verify", 00:19:58.974 "status": "finished", 00:19:58.974 "verify_range": { 00:19:58.974 "start": 0, 00:19:58.974 "length": 8192 00:19:58.974 }, 00:19:58.974 "queue_depth": 128, 00:19:58.974 "io_size": 4096, 00:19:58.974 "runtime": 10.027758, 00:19:58.974 "iops": 
3407.142453976253, 00:19:58.974 "mibps": 13.309150210844738, 00:19:58.974 "io_failed": 0, 00:19:58.974 "io_timeout": 0, 00:19:58.974 "avg_latency_us": 37492.97387456883, 00:19:58.974 "min_latency_us": 5873.967407407407, 00:19:58.974 "max_latency_us": 38447.78666666667 00:19:58.974 } 00:19:58.974 ], 00:19:58.974 "core_count": 1 00:19:58.974 } 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 234391 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 234391 ']' 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 234391 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234391 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234391' 00:19:58.974 killing process with pid 234391 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 234391 00:19:58.974 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.974 00:19:58.974 Latency(us) 00:19:58.974 [2024-12-06T18:18:44.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.974 [2024-12-06T18:18:44.023Z] 
=================================================================================================================== 00:19:58.974 [2024-12-06T18:18:44.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.974 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 234391 00:19:59.234 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h9DV6vWdV 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h9DV6vWdV 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h9DV6vWdV 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6h9DV6vWdV 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=235713 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 235713 /var/tmp/bdevperf.sock 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 235713 ']' 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.235 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.235 [2024-12-06 19:18:44.104363] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:19:59.235 [2024-12-06 19:18:44.104434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235713 ] 00:19:59.235 [2024-12-06 19:18:44.169438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.235 [2024-12-06 19:18:44.223261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.493 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.493 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.493 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6h9DV6vWdV 00:19:59.751 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.010 [2024-12-06 19:18:44.905951] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.010 [2024-12-06 19:18:44.914986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.010 [2024-12-06 19:18:44.915090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1550580 (107): Transport endpoint is not connected 00:20:00.010 [2024-12-06 19:18:44.916063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1550580 (9): Bad file descriptor 00:20:00.010 
[2024-12-06 19:18:44.917064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:00.010 [2024-12-06 19:18:44.917083] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.010 [2024-12-06 19:18:44.917096] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:00.010 [2024-12-06 19:18:44.917115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:00.010 request: 00:20:00.010 { 00:20:00.010 "name": "TLSTEST", 00:20:00.010 "trtype": "tcp", 00:20:00.010 "traddr": "10.0.0.2", 00:20:00.010 "adrfam": "ipv4", 00:20:00.010 "trsvcid": "4420", 00:20:00.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.010 "prchk_reftag": false, 00:20:00.010 "prchk_guard": false, 00:20:00.010 "hdgst": false, 00:20:00.010 "ddgst": false, 00:20:00.010 "psk": "key0", 00:20:00.010 "allow_unrecognized_csi": false, 00:20:00.010 "method": "bdev_nvme_attach_controller", 00:20:00.010 "req_id": 1 00:20:00.010 } 00:20:00.010 Got JSON-RPC error response 00:20:00.010 response: 00:20:00.010 { 00:20:00.010 "code": -5, 00:20:00.010 "message": "Input/output error" 00:20:00.010 } 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 235713 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 235713 ']' 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 235713 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235713 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235713' 00:20:00.010 killing process with pid 235713 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 235713 00:20:00.010 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.010 00:20:00.010 Latency(us) 00:20:00.010 [2024-12-06T18:18:45.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.010 [2024-12-06T18:18:45.059Z] =================================================================================================================== 00:20:00.010 [2024-12-06T18:18:45.059Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.010 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 235713 00:20:00.267 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:00.267 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:00.267 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.267 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.267 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ruUML2lgVE 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
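The `NOT run_bdevperf ...` wrapper above, together with the `local es=0` / `es=1` / `(( !es == 0 ))` trace lines, asserts that attaching with the wrong key fails cleanly rather than succeeding. A simplified stand-in for the `autotest_common.sh` helper is sketched below; the real one (via `valid_exec_arg`) also checks that the argument is runnable and rejects statuses above 128, so a crash is not mistaken for an expected failure:

```shell
# NOT: run a command and succeed only if that command fails.
# Simplified sketch of the autotest_common.sh helper; the original
# additionally treats exit codes > 128 (killed by signal) as errors.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert the status: a non-zero exit from the wrapped command is
    # the expected outcome.
    (( es != 0 ))
}

NOT false   # wrapped command fails, so NOT itself succeeds
```

Capturing the status with `|| es=$?` keeps the helper safe under `set -e`, since the wrapped command's failure is consumed by the `||` list instead of aborting the script.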
00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ruUML2lgVE 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ruUML2lgVE 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ruUML2lgVE 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=235854 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 235854 
/var/tmp/bdevperf.sock 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 235854 ']' 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.268 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.268 [2024-12-06 19:18:45.214570] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:00.268 [2024-12-06 19:18:45.214657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235854 ] 00:20:00.268 [2024-12-06 19:18:45.282196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.525 [2024-12-06 19:18:45.340596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.525 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.525 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.525 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ruUML2lgVE 00:20:00.783 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:01.043 [2024-12-06 19:18:45.960039] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.043 [2024-12-06 19:18:45.967621] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:01.043 [2024-12-06 19:18:45.967654] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:01.043 [2024-12-06 19:18:45.967691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:01.043 [2024-12-06 19:18:45.968388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f580 (107): Transport endpoint is not connected 00:20:01.043 [2024-12-06 19:18:45.969378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66f580 (9): Bad file descriptor 00:20:01.043 [2024-12-06 19:18:45.970379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:01.043 [2024-12-06 19:18:45.970399] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:01.043 [2024-12-06 19:18:45.970413] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:01.043 [2024-12-06 19:18:45.970431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:01.043 request: 00:20:01.043 { 00:20:01.043 "name": "TLSTEST", 00:20:01.043 "trtype": "tcp", 00:20:01.043 "traddr": "10.0.0.2", 00:20:01.043 "adrfam": "ipv4", 00:20:01.043 "trsvcid": "4420", 00:20:01.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.043 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.043 "prchk_reftag": false, 00:20:01.043 "prchk_guard": false, 00:20:01.043 "hdgst": false, 00:20:01.043 "ddgst": false, 00:20:01.043 "psk": "key0", 00:20:01.043 "allow_unrecognized_csi": false, 00:20:01.043 "method": "bdev_nvme_attach_controller", 00:20:01.043 "req_id": 1 00:20:01.043 } 00:20:01.043 Got JSON-RPC error response 00:20:01.043 response: 00:20:01.043 { 00:20:01.043 "code": -5, 00:20:01.043 "message": "Input/output error" 00:20:01.043 } 00:20:01.043 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 235854 00:20:01.043 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 235854 ']' 00:20:01.043 19:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 235854 00:20:01.043 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:01.043 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.043 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235854 00:20:01.043 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:01.043 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:01.043 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235854' 00:20:01.043 killing process with pid 235854 00:20:01.043 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 235854 00:20:01.043 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.043 00:20:01.043 Latency(us) 00:20:01.043 [2024-12-06T18:18:46.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.043 [2024-12-06T18:18:46.092Z] =================================================================================================================== 00:20:01.043 [2024-12-06T18:18:46.092Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.043 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 235854 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:01.304 19:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ruUML2lgVE 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ruUML2lgVE 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ruUML2lgVE 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ruUML2lgVE 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=235994 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 235994 /var/tmp/bdevperf.sock 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 235994 ']' 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.304 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.304 [2024-12-06 19:18:46.287156] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:01.304 [2024-12-06 19:18:46.287245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235994 ] 00:20:01.562 [2024-12-06 19:18:46.353648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.562 [2024-12-06 19:18:46.407962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.562 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.562 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.562 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ruUML2lgVE 00:20:01.823 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.083 [2024-12-06 19:18:47.042577] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.083 [2024-12-06 19:18:47.050385] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:02.083 [2024-12-06 19:18:47.050416] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:02.083 [2024-12-06 19:18:47.050461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:02.083 [2024-12-06 19:18:47.050716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60580 (107): Transport endpoint is not connected 00:20:02.083 [2024-12-06 19:18:47.051692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60580 (9): Bad file descriptor 00:20:02.083 [2024-12-06 19:18:47.052692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:02.083 [2024-12-06 19:18:47.052733] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:02.083 [2024-12-06 19:18:47.052748] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:02.083 [2024-12-06 19:18:47.052767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:02.083 request: 00:20:02.083 { 00:20:02.083 "name": "TLSTEST", 00:20:02.083 "trtype": "tcp", 00:20:02.083 "traddr": "10.0.0.2", 00:20:02.083 "adrfam": "ipv4", 00:20:02.083 "trsvcid": "4420", 00:20:02.083 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:02.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.083 "prchk_reftag": false, 00:20:02.083 "prchk_guard": false, 00:20:02.083 "hdgst": false, 00:20:02.083 "ddgst": false, 00:20:02.083 "psk": "key0", 00:20:02.083 "allow_unrecognized_csi": false, 00:20:02.083 "method": "bdev_nvme_attach_controller", 00:20:02.083 "req_id": 1 00:20:02.083 } 00:20:02.083 Got JSON-RPC error response 00:20:02.083 response: 00:20:02.083 { 00:20:02.083 "code": -5, 00:20:02.083 "message": "Input/output error" 00:20:02.083 } 00:20:02.083 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 235994 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 235994 ']' 00:20:02.084 19:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 235994 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235994 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235994' 00:20:02.084 killing process with pid 235994 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 235994 00:20:02.084 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.084 00:20:02.084 Latency(us) 00:20:02.084 [2024-12-06T18:18:47.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.084 [2024-12-06T18:18:47.133Z] =================================================================================================================== 00:20:02.084 [2024-12-06T18:18:47.133Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.084 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 235994 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.342 19:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=236134 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 236134 /var/tmp/bdevperf.sock 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 236134 ']' 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.342 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.343 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.343 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.343 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.343 [2024-12-06 19:18:47.361252] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
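Annotator's note: the failure above reports "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2", which shows the shape of the TLS PSK identity the target searches for: a fixed-looking prefix followed by the host NQN and subsystem NQN. A minimal sketch of that identity string, with the caveat that the "0" and "01" fields encode protocol version and hash selection and are hard-coded here exactly as observed in this log (treating them as constants is an assumption; the helper name is ours, not SPDK's):

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    """Build the NVMe/TCP TLS PSK identity string in the form seen in this log.

    Observed format: "NVMe0R01 <hostnqn> <subnqn>". The "0" (version) and
    "01" (hash id) fields are fixed here as an assumption based on the log.
    """
    return f"NVMe0R01 {hostnqn} {subnqn}"


# Reproduces the identity from the "Could not find PSK" error above.
print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))
```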
00:20:02.343 [2024-12-06 19:18:47.361341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236134 ] 00:20:02.601 [2024-12-06 19:18:47.428063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.601 [2024-12-06 19:18:47.481595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.601 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.601 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.601 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:02.859 [2024-12-06 19:18:47.854404] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:02.859 [2024-12-06 19:18:47.854454] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:02.859 request: 00:20:02.859 { 00:20:02.859 "name": "key0", 00:20:02.859 "path": "", 00:20:02.859 "method": "keyring_file_add_key", 00:20:02.859 "req_id": 1 00:20:02.859 } 00:20:02.859 Got JSON-RPC error response 00:20:02.859 response: 00:20:02.859 { 00:20:02.859 "code": -1, 00:20:02.859 "message": "Operation not permitted" 00:20:02.859 } 00:20:02.859 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.118 [2024-12-06 19:18:48.115226] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:03.118 [2024-12-06 19:18:48.115284] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:03.118 request: 00:20:03.118 { 00:20:03.118 "name": "TLSTEST", 00:20:03.118 "trtype": "tcp", 00:20:03.118 "traddr": "10.0.0.2", 00:20:03.118 "adrfam": "ipv4", 00:20:03.118 "trsvcid": "4420", 00:20:03.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.118 "prchk_reftag": false, 00:20:03.118 "prchk_guard": false, 00:20:03.118 "hdgst": false, 00:20:03.118 "ddgst": false, 00:20:03.118 "psk": "key0", 00:20:03.118 "allow_unrecognized_csi": false, 00:20:03.118 "method": "bdev_nvme_attach_controller", 00:20:03.118 "req_id": 1 00:20:03.118 } 00:20:03.118 Got JSON-RPC error response 00:20:03.118 response: 00:20:03.118 { 00:20:03.118 "code": -126, 00:20:03.118 "message": "Required key not available" 00:20:03.118 } 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 236134 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 236134 ']' 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 236134 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.118 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236134 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236134' 00:20:03.376 killing process with pid 236134 00:20:03.376 
19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 236134 00:20:03.376 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.376 00:20:03.376 Latency(us) 00:20:03.376 [2024-12-06T18:18:48.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.376 [2024-12-06T18:18:48.425Z] =================================================================================================================== 00:20:03.376 [2024-12-06T18:18:48.425Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 236134 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:03.376 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 232490 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 232490 ']' 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 232490 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 232490 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 232490' 00:20:03.377 killing process with pid 232490 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 232490 00:20:03.377 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 232490 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Nubxqbildy 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:03.635 19:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Nubxqbildy 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=236286 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 236286 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 236286 ']' 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.635 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.897 [2024-12-06 19:18:48.715438] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
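Annotator's note: the `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` call above (via `format_key NVMeTLSkey-1 ... 2` and an inline `python -`) emits a key of the form `NVMeTLSkey-1:02:<base64>:`. A sketch of what that step appears to compute: base64 over the configured key bytes with a CRC32 appended, prefixed by the version tag and a two-digit hash id (here 02, from the second argument). The little-endian byte order of the appended CRC is an assumption, matching the nvme-cli TLS key convention:

```python
import base64
import zlib


def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the PSK interchange encoding used by the test above.

    Assumption: the CRC32 of the key bytes is appended in little-endian
    order before base64-encoding, as in the nvme-cli TLS key format.
    """
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"{prefix}:{hash_id:02x}:{b64}:"


key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```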
00:20:03.897 [2024-12-06 19:18:48.715527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.897 [2024-12-06 19:18:48.788413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.897 [2024-12-06 19:18:48.846222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.897 [2024-12-06 19:18:48.846299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.897 [2024-12-06 19:18:48.846312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.897 [2024-12-06 19:18:48.846323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.897 [2024-12-06 19:18:48.846333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.897 [2024-12-06 19:18:48.846930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Nubxqbildy 00:20:04.158 19:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.419 [2024-12-06 19:18:49.245167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.419 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.680 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.939 [2024-12-06 19:18:49.790634] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.939 [2024-12-06 19:18:49.790976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:04.939 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.215 malloc0 00:20:05.215 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.476 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:05.737 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nubxqbildy 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Nubxqbildy 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=236604 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.997 19:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 236604 /var/tmp/bdevperf.sock 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 236604 ']' 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.997 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.256 [2024-12-06 19:18:51.072732] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:06.256 [2024-12-06 19:18:51.072831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236604 ] 00:20:06.256 [2024-12-06 19:18:51.139508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.256 [2024-12-06 19:18:51.196416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.515 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.515 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.515 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:06.810 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.105 [2024-12-06 19:18:51.895960] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.105 TLSTESTn1 00:20:07.105 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:07.105 Running I/O for 10 seconds... 
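Annotator's note: after the ten-second run, bdevperf prints per-second IOPS samples and a summary "results" JSON (further down in this log) whose fields are mutually redundant: MiB/s is IOPS scaled by the 4096-byte I/O size, and IOPS times runtime recovers the total completed I/O count. A quick consistency check using the values reported in that summary:

```python
# Values copied from the bdevperf "results" JSON later in this log.
iops = 3323.2054595703617
mibps = 12.981271326446725
runtime_s = 10.019543
io_size = 4096  # bytes, from "io_size": 4096

# MiB/s is IOPS scaled by the block size (4096 / 2**20 == 1/256).
assert abs(iops * io_size / 2**20 - mibps) < 1e-9

# IOPS times runtime recovers the completed I/O count (a whole number,
# up to the rounding of the printed runtime).
print(round(iops * runtime_s))
```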
00:20:09.098 3118.00 IOPS, 12.18 MiB/s [2024-12-06T18:18:55.521Z] 3374.50 IOPS, 13.18 MiB/s [2024-12-06T18:18:56.453Z] 3406.33 IOPS, 13.31 MiB/s [2024-12-06T18:18:57.385Z] 3401.25 IOPS, 13.29 MiB/s [2024-12-06T18:18:58.319Z] 3313.40 IOPS, 12.94 MiB/s [2024-12-06T18:18:59.259Z] 3313.67 IOPS, 12.94 MiB/s [2024-12-06T18:19:00.194Z] 3310.86 IOPS, 12.93 MiB/s [2024-12-06T18:19:01.134Z] 3329.38 IOPS, 13.01 MiB/s [2024-12-06T18:19:02.511Z] 3302.89 IOPS, 12.90 MiB/s [2024-12-06T18:19:02.511Z] 3317.20 IOPS, 12.96 MiB/s 00:20:17.462 Latency(us) 00:20:17.462 [2024-12-06T18:19:02.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.462 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.462 Verification LBA range: start 0x0 length 0x2000 00:20:17.462 TLSTESTn1 : 10.02 3323.21 12.98 0.00 0.00 38458.29 7621.59 50098.63 00:20:17.462 [2024-12-06T18:19:02.511Z] =================================================================================================================== 00:20:17.462 [2024-12-06T18:19:02.511Z] Total : 3323.21 12.98 0.00 0.00 38458.29 7621.59 50098.63 00:20:17.462 { 00:20:17.462 "results": [ 00:20:17.462 { 00:20:17.462 "job": "TLSTESTn1", 00:20:17.462 "core_mask": "0x4", 00:20:17.462 "workload": "verify", 00:20:17.462 "status": "finished", 00:20:17.462 "verify_range": { 00:20:17.462 "start": 0, 00:20:17.462 "length": 8192 00:20:17.462 }, 00:20:17.462 "queue_depth": 128, 00:20:17.462 "io_size": 4096, 00:20:17.462 "runtime": 10.019543, 00:20:17.462 "iops": 3323.2054595703617, 00:20:17.462 "mibps": 12.981271326446725, 00:20:17.462 "io_failed": 0, 00:20:17.462 "io_timeout": 0, 00:20:17.462 "avg_latency_us": 38458.288971423295, 00:20:17.462 "min_latency_us": 7621.594074074074, 00:20:17.462 "max_latency_us": 50098.63111111111 00:20:17.462 } 00:20:17.462 ], 00:20:17.462 "core_count": 1 00:20:17.462 } 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 236604 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 236604 ']' 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 236604 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236604 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236604' 00:20:17.462 killing process with pid 236604 00:20:17.462 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 236604 00:20:17.462 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.462 00:20:17.462 Latency(us) 00:20:17.462 [2024-12-06T18:19:02.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.463 [2024-12-06T18:19:02.512Z] =================================================================================================================== 00:20:17.463 [2024-12-06T18:19:02.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 236604 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Nubxqbildy 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nubxqbildy 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nubxqbildy 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nubxqbildy 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Nubxqbildy 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=237979 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.463 19:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 237979 /var/tmp/bdevperf.sock 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 237979 ']' 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.463 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.463 [2024-12-06 19:19:02.471319] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:17.463 [2024-12-06 19:19:02.471428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237979 ] 00:20:17.721 [2024-12-06 19:19:02.537645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.721 [2024-12-06 19:19:02.593325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.721 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.721 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:17.721 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:17.980 [2024-12-06 19:19:02.942226] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Nubxqbildy': 0100666 00:20:17.980 [2024-12-06 19:19:02.942278] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:17.980 request: 00:20:17.980 { 00:20:17.980 "name": "key0", 00:20:17.980 "path": "/tmp/tmp.Nubxqbildy", 00:20:17.980 "method": "keyring_file_add_key", 00:20:17.980 "req_id": 1 00:20:17.980 } 00:20:17.980 Got JSON-RPC error response 00:20:17.980 response: 00:20:17.980 { 00:20:17.980 "code": -1, 00:20:17.980 "message": "Operation not permitted" 00:20:17.980 } 00:20:17.980 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.238 [2024-12-06 19:19:03.247148] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.238 [2024-12-06 19:19:03.247210] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:18.238 request: 00:20:18.238 { 00:20:18.238 "name": "TLSTEST", 00:20:18.238 "trtype": "tcp", 00:20:18.238 "traddr": "10.0.0.2", 00:20:18.238 "adrfam": "ipv4", 00:20:18.238 "trsvcid": "4420", 00:20:18.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.239 "prchk_reftag": false, 00:20:18.239 "prchk_guard": false, 00:20:18.239 "hdgst": false, 00:20:18.239 "ddgst": false, 00:20:18.239 "psk": "key0", 00:20:18.239 "allow_unrecognized_csi": false, 00:20:18.239 "method": "bdev_nvme_attach_controller", 00:20:18.239 "req_id": 1 00:20:18.239 } 00:20:18.239 Got JSON-RPC error response 00:20:18.239 response: 00:20:18.239 { 00:20:18.239 "code": -126, 00:20:18.239 "message": "Required key not available" 00:20:18.239 } 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 237979 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 237979 ']' 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 237979 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.239 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 237979 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 237979' 00:20:18.497 killing process with pid 237979 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 237979 00:20:18.497 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.497 00:20:18.497 Latency(us) 00:20:18.497 [2024-12-06T18:19:03.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.497 [2024-12-06T18:19:03.546Z] =================================================================================================================== 00:20:18.497 [2024-12-06T18:19:03.546Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 237979 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 236286 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 236286 ']' 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 236286 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 236286 00:20:18.497 19:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 236286' 00:20:18.497 killing process with pid 236286 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 236286 00:20:18.497 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 236286 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=238180 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 238180 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 238180 ']' 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:18.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.756 19:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.015 [2024-12-06 19:19:03.820246] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:19.015 [2024-12-06 19:19:03.820318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.015 [2024-12-06 19:19:03.889215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.015 [2024-12-06 19:19:03.942667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.015 [2024-12-06 19:19:03.942741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.015 [2024-12-06 19:19:03.942767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.015 [2024-12-06 19:19:03.942778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.015 [2024-12-06 19:19:03.942788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.015 [2024-12-06 19:19:03.943374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.015 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.015 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.015 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.015 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.015 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Nubxqbildy 00:20:19.274 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.274 [2024-12-06 19:19:04.321507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.533 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.791 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:20.049 [2024-12-06 19:19:04.862959] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.049 [2024-12-06 19:19:04.863229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.049 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.306 malloc0 00:20:20.306 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.563 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:20.820 [2024-12-06 19:19:05.654878] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Nubxqbildy': 0100666 00:20:20.820 [2024-12-06 19:19:05.654924] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:20.820 request: 00:20:20.820 { 00:20:20.820 "name": "key0", 00:20:20.820 "path": "/tmp/tmp.Nubxqbildy", 00:20:20.820 "method": "keyring_file_add_key", 00:20:20.820 "req_id": 1 
00:20:20.820 } 00:20:20.820 Got JSON-RPC error response 00:20:20.820 response: 00:20:20.820 { 00:20:20.820 "code": -1, 00:20:20.820 "message": "Operation not permitted" 00:20:20.820 } 00:20:20.820 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.078 [2024-12-06 19:19:05.923639] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:21.078 [2024-12-06 19:19:05.923688] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:21.078 request: 00:20:21.078 { 00:20:21.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.078 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.078 "psk": "key0", 00:20:21.078 "method": "nvmf_subsystem_add_host", 00:20:21.078 "req_id": 1 00:20:21.078 } 00:20:21.078 Got JSON-RPC error response 00:20:21.078 response: 00:20:21.078 { 00:20:21.078 "code": -32603, 00:20:21.078 "message": "Internal error" 00:20:21.078 } 00:20:21.078 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 238180 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 238180 ']' 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 238180 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.079 19:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238180 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238180' 00:20:21.079 killing process with pid 238180 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 238180 00:20:21.079 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 238180 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Nubxqbildy 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=238485 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 238485 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 238485 ']' 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.336 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.336 [2024-12-06 19:19:06.281659] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:21.336 [2024-12-06 19:19:06.281776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.336 [2024-12-06 19:19:06.352996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.594 [2024-12-06 19:19:06.408215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.594 [2024-12-06 19:19:06.408273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.594 [2024-12-06 19:19:06.408296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.594 [2024-12-06 19:19:06.408306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.594 [2024-12-06 19:19:06.408317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:21.594 [2024-12-06 19:19:06.408948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Nubxqbildy 00:20:21.594 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.852 [2024-12-06 19:19:06.795639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.852 19:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:22.112 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:22.373 [2024-12-06 19:19:07.341141] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.373 [2024-12-06 19:19:07.341430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:22.373 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.633 malloc0 00:20:22.633 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.892 19:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:23.150 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=238771 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 238771 /var/tmp/bdevperf.sock 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 238771 ']' 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:23.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.718 [2024-12-06 19:19:08.505578] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:23.718 [2024-12-06 19:19:08.505653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238771 ] 00:20:23.718 [2024-12-06 19:19:08.572245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.718 [2024-12-06 19:19:08.628621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.718 19:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:24.287 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.287 [2024-12-06 19:19:09.302458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.546 TLSTESTn1 00:20:24.546 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:24.805 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:24.805 "subsystems": [ 00:20:24.805 { 00:20:24.805 "subsystem": "keyring", 00:20:24.805 "config": [ 00:20:24.805 { 00:20:24.805 "method": "keyring_file_add_key", 00:20:24.805 "params": { 00:20:24.805 "name": "key0", 00:20:24.805 "path": "/tmp/tmp.Nubxqbildy" 00:20:24.805 } 00:20:24.805 } 00:20:24.805 ] 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "subsystem": "iobuf", 00:20:24.805 "config": [ 00:20:24.805 { 00:20:24.805 "method": "iobuf_set_options", 00:20:24.805 "params": { 00:20:24.805 "small_pool_count": 8192, 00:20:24.805 "large_pool_count": 1024, 00:20:24.805 "small_bufsize": 8192, 00:20:24.805 "large_bufsize": 135168, 00:20:24.805 "enable_numa": false 00:20:24.805 } 00:20:24.805 } 00:20:24.805 ] 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "subsystem": "sock", 00:20:24.805 "config": [ 00:20:24.805 { 00:20:24.805 "method": "sock_set_default_impl", 00:20:24.805 "params": { 00:20:24.805 "impl_name": "posix" 00:20:24.805 } 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "method": "sock_impl_set_options", 00:20:24.805 "params": { 00:20:24.805 "impl_name": "ssl", 00:20:24.805 "recv_buf_size": 4096, 00:20:24.805 "send_buf_size": 4096, 00:20:24.805 "enable_recv_pipe": true, 00:20:24.805 "enable_quickack": false, 00:20:24.805 "enable_placement_id": 0, 00:20:24.805 "enable_zerocopy_send_server": true, 00:20:24.805 "enable_zerocopy_send_client": false, 00:20:24.805 "zerocopy_threshold": 0, 00:20:24.805 "tls_version": 0, 00:20:24.805 "enable_ktls": false 00:20:24.805 } 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "method": "sock_impl_set_options", 00:20:24.805 "params": { 00:20:24.805 "impl_name": "posix", 00:20:24.805 "recv_buf_size": 2097152, 00:20:24.805 "send_buf_size": 2097152, 00:20:24.805 "enable_recv_pipe": true, 00:20:24.805 "enable_quickack": false, 00:20:24.805 "enable_placement_id": 0, 
00:20:24.805 "enable_zerocopy_send_server": true, 00:20:24.805 "enable_zerocopy_send_client": false, 00:20:24.805 "zerocopy_threshold": 0, 00:20:24.805 "tls_version": 0, 00:20:24.805 "enable_ktls": false 00:20:24.805 } 00:20:24.805 } 00:20:24.805 ] 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "subsystem": "vmd", 00:20:24.805 "config": [] 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "subsystem": "accel", 00:20:24.805 "config": [ 00:20:24.805 { 00:20:24.805 "method": "accel_set_options", 00:20:24.805 "params": { 00:20:24.805 "small_cache_size": 128, 00:20:24.805 "large_cache_size": 16, 00:20:24.805 "task_count": 2048, 00:20:24.805 "sequence_count": 2048, 00:20:24.805 "buf_count": 2048 00:20:24.805 } 00:20:24.805 } 00:20:24.805 ] 00:20:24.805 }, 00:20:24.805 { 00:20:24.805 "subsystem": "bdev", 00:20:24.805 "config": [ 00:20:24.805 { 00:20:24.805 "method": "bdev_set_options", 00:20:24.805 "params": { 00:20:24.805 "bdev_io_pool_size": 65535, 00:20:24.805 "bdev_io_cache_size": 256, 00:20:24.805 "bdev_auto_examine": true, 00:20:24.805 "iobuf_small_cache_size": 128, 00:20:24.805 "iobuf_large_cache_size": 16 00:20:24.805 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_raid_set_options", 00:20:24.806 "params": { 00:20:24.806 "process_window_size_kb": 1024, 00:20:24.806 "process_max_bandwidth_mb_sec": 0 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_iscsi_set_options", 00:20:24.806 "params": { 00:20:24.806 "timeout_sec": 30 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_nvme_set_options", 00:20:24.806 "params": { 00:20:24.806 "action_on_timeout": "none", 00:20:24.806 "timeout_us": 0, 00:20:24.806 "timeout_admin_us": 0, 00:20:24.806 "keep_alive_timeout_ms": 10000, 00:20:24.806 "arbitration_burst": 0, 00:20:24.806 "low_priority_weight": 0, 00:20:24.806 "medium_priority_weight": 0, 00:20:24.806 "high_priority_weight": 0, 00:20:24.806 "nvme_adminq_poll_period_us": 10000, 00:20:24.806 "nvme_ioq_poll_period_us": 0, 
00:20:24.806 "io_queue_requests": 0, 00:20:24.806 "delay_cmd_submit": true, 00:20:24.806 "transport_retry_count": 4, 00:20:24.806 "bdev_retry_count": 3, 00:20:24.806 "transport_ack_timeout": 0, 00:20:24.806 "ctrlr_loss_timeout_sec": 0, 00:20:24.806 "reconnect_delay_sec": 0, 00:20:24.806 "fast_io_fail_timeout_sec": 0, 00:20:24.806 "disable_auto_failback": false, 00:20:24.806 "generate_uuids": false, 00:20:24.806 "transport_tos": 0, 00:20:24.806 "nvme_error_stat": false, 00:20:24.806 "rdma_srq_size": 0, 00:20:24.806 "io_path_stat": false, 00:20:24.806 "allow_accel_sequence": false, 00:20:24.806 "rdma_max_cq_size": 0, 00:20:24.806 "rdma_cm_event_timeout_ms": 0, 00:20:24.806 "dhchap_digests": [ 00:20:24.806 "sha256", 00:20:24.806 "sha384", 00:20:24.806 "sha512" 00:20:24.806 ], 00:20:24.806 "dhchap_dhgroups": [ 00:20:24.806 "null", 00:20:24.806 "ffdhe2048", 00:20:24.806 "ffdhe3072", 00:20:24.806 "ffdhe4096", 00:20:24.806 "ffdhe6144", 00:20:24.806 "ffdhe8192" 00:20:24.806 ] 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_nvme_set_hotplug", 00:20:24.806 "params": { 00:20:24.806 "period_us": 100000, 00:20:24.806 "enable": false 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_malloc_create", 00:20:24.806 "params": { 00:20:24.806 "name": "malloc0", 00:20:24.806 "num_blocks": 8192, 00:20:24.806 "block_size": 4096, 00:20:24.806 "physical_block_size": 4096, 00:20:24.806 "uuid": "7d29ef8f-f780-4181-b783-b0e4cffc82e1", 00:20:24.806 "optimal_io_boundary": 0, 00:20:24.806 "md_size": 0, 00:20:24.806 "dif_type": 0, 00:20:24.806 "dif_is_head_of_md": false, 00:20:24.806 "dif_pi_format": 0 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "bdev_wait_for_examine" 00:20:24.806 } 00:20:24.806 ] 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "subsystem": "nbd", 00:20:24.806 "config": [] 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "subsystem": "scheduler", 00:20:24.806 "config": [ 00:20:24.806 { 00:20:24.806 "method": 
"framework_set_scheduler", 00:20:24.806 "params": { 00:20:24.806 "name": "static" 00:20:24.806 } 00:20:24.806 } 00:20:24.806 ] 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "subsystem": "nvmf", 00:20:24.806 "config": [ 00:20:24.806 { 00:20:24.806 "method": "nvmf_set_config", 00:20:24.806 "params": { 00:20:24.806 "discovery_filter": "match_any", 00:20:24.806 "admin_cmd_passthru": { 00:20:24.806 "identify_ctrlr": false 00:20:24.806 }, 00:20:24.806 "dhchap_digests": [ 00:20:24.806 "sha256", 00:20:24.806 "sha384", 00:20:24.806 "sha512" 00:20:24.806 ], 00:20:24.806 "dhchap_dhgroups": [ 00:20:24.806 "null", 00:20:24.806 "ffdhe2048", 00:20:24.806 "ffdhe3072", 00:20:24.806 "ffdhe4096", 00:20:24.806 "ffdhe6144", 00:20:24.806 "ffdhe8192" 00:20:24.806 ] 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_set_max_subsystems", 00:20:24.806 "params": { 00:20:24.806 "max_subsystems": 1024 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_set_crdt", 00:20:24.806 "params": { 00:20:24.806 "crdt1": 0, 00:20:24.806 "crdt2": 0, 00:20:24.806 "crdt3": 0 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_create_transport", 00:20:24.806 "params": { 00:20:24.806 "trtype": "TCP", 00:20:24.806 "max_queue_depth": 128, 00:20:24.806 "max_io_qpairs_per_ctrlr": 127, 00:20:24.806 "in_capsule_data_size": 4096, 00:20:24.806 "max_io_size": 131072, 00:20:24.806 "io_unit_size": 131072, 00:20:24.806 "max_aq_depth": 128, 00:20:24.806 "num_shared_buffers": 511, 00:20:24.806 "buf_cache_size": 4294967295, 00:20:24.806 "dif_insert_or_strip": false, 00:20:24.806 "zcopy": false, 00:20:24.806 "c2h_success": false, 00:20:24.806 "sock_priority": 0, 00:20:24.806 "abort_timeout_sec": 1, 00:20:24.806 "ack_timeout": 0, 00:20:24.806 "data_wr_pool_size": 0 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_create_subsystem", 00:20:24.806 "params": { 00:20:24.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.806 
"allow_any_host": false, 00:20:24.806 "serial_number": "SPDK00000000000001", 00:20:24.806 "model_number": "SPDK bdev Controller", 00:20:24.806 "max_namespaces": 10, 00:20:24.806 "min_cntlid": 1, 00:20:24.806 "max_cntlid": 65519, 00:20:24.806 "ana_reporting": false 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_subsystem_add_host", 00:20:24.806 "params": { 00:20:24.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.806 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.806 "psk": "key0" 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_subsystem_add_ns", 00:20:24.806 "params": { 00:20:24.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.806 "namespace": { 00:20:24.806 "nsid": 1, 00:20:24.806 "bdev_name": "malloc0", 00:20:24.806 "nguid": "7D29EF8FF7804181B783B0E4CFFC82E1", 00:20:24.806 "uuid": "7d29ef8f-f780-4181-b783-b0e4cffc82e1", 00:20:24.806 "no_auto_visible": false 00:20:24.806 } 00:20:24.806 } 00:20:24.806 }, 00:20:24.806 { 00:20:24.806 "method": "nvmf_subsystem_add_listener", 00:20:24.806 "params": { 00:20:24.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.806 "listen_address": { 00:20:24.806 "trtype": "TCP", 00:20:24.806 "adrfam": "IPv4", 00:20:24.806 "traddr": "10.0.0.2", 00:20:24.806 "trsvcid": "4420" 00:20:24.806 }, 00:20:24.806 "secure_channel": true 00:20:24.806 } 00:20:24.806 } 00:20:24.806 ] 00:20:24.806 } 00:20:24.806 ] 00:20:24.806 }' 00:20:24.806 19:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:25.065 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:25.065 "subsystems": [ 00:20:25.065 { 00:20:25.065 "subsystem": "keyring", 00:20:25.065 "config": [ 00:20:25.065 { 00:20:25.065 "method": "keyring_file_add_key", 00:20:25.065 "params": { 00:20:25.065 "name": "key0", 00:20:25.065 "path": "/tmp/tmp.Nubxqbildy" 00:20:25.065 } 
00:20:25.065 } 00:20:25.065 ] 00:20:25.065 }, 00:20:25.065 { 00:20:25.065 "subsystem": "iobuf", 00:20:25.065 "config": [ 00:20:25.065 { 00:20:25.065 "method": "iobuf_set_options", 00:20:25.065 "params": { 00:20:25.065 "small_pool_count": 8192, 00:20:25.065 "large_pool_count": 1024, 00:20:25.065 "small_bufsize": 8192, 00:20:25.065 "large_bufsize": 135168, 00:20:25.065 "enable_numa": false 00:20:25.065 } 00:20:25.065 } 00:20:25.065 ] 00:20:25.065 }, 00:20:25.065 { 00:20:25.065 "subsystem": "sock", 00:20:25.065 "config": [ 00:20:25.065 { 00:20:25.065 "method": "sock_set_default_impl", 00:20:25.065 "params": { 00:20:25.065 "impl_name": "posix" 00:20:25.065 } 00:20:25.065 }, 00:20:25.065 { 00:20:25.065 "method": "sock_impl_set_options", 00:20:25.065 "params": { 00:20:25.065 "impl_name": "ssl", 00:20:25.065 "recv_buf_size": 4096, 00:20:25.065 "send_buf_size": 4096, 00:20:25.065 "enable_recv_pipe": true, 00:20:25.065 "enable_quickack": false, 00:20:25.065 "enable_placement_id": 0, 00:20:25.066 "enable_zerocopy_send_server": true, 00:20:25.066 "enable_zerocopy_send_client": false, 00:20:25.066 "zerocopy_threshold": 0, 00:20:25.066 "tls_version": 0, 00:20:25.066 "enable_ktls": false 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "sock_impl_set_options", 00:20:25.066 "params": { 00:20:25.066 "impl_name": "posix", 00:20:25.066 "recv_buf_size": 2097152, 00:20:25.066 "send_buf_size": 2097152, 00:20:25.066 "enable_recv_pipe": true, 00:20:25.066 "enable_quickack": false, 00:20:25.066 "enable_placement_id": 0, 00:20:25.066 "enable_zerocopy_send_server": true, 00:20:25.066 "enable_zerocopy_send_client": false, 00:20:25.066 "zerocopy_threshold": 0, 00:20:25.066 "tls_version": 0, 00:20:25.066 "enable_ktls": false 00:20:25.066 } 00:20:25.066 } 00:20:25.066 ] 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "subsystem": "vmd", 00:20:25.066 "config": [] 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "subsystem": "accel", 00:20:25.066 "config": [ 00:20:25.066 { 00:20:25.066 
"method": "accel_set_options", 00:20:25.066 "params": { 00:20:25.066 "small_cache_size": 128, 00:20:25.066 "large_cache_size": 16, 00:20:25.066 "task_count": 2048, 00:20:25.066 "sequence_count": 2048, 00:20:25.066 "buf_count": 2048 00:20:25.066 } 00:20:25.066 } 00:20:25.066 ] 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "subsystem": "bdev", 00:20:25.066 "config": [ 00:20:25.066 { 00:20:25.066 "method": "bdev_set_options", 00:20:25.066 "params": { 00:20:25.066 "bdev_io_pool_size": 65535, 00:20:25.066 "bdev_io_cache_size": 256, 00:20:25.066 "bdev_auto_examine": true, 00:20:25.066 "iobuf_small_cache_size": 128, 00:20:25.066 "iobuf_large_cache_size": 16 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_raid_set_options", 00:20:25.066 "params": { 00:20:25.066 "process_window_size_kb": 1024, 00:20:25.066 "process_max_bandwidth_mb_sec": 0 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_iscsi_set_options", 00:20:25.066 "params": { 00:20:25.066 "timeout_sec": 30 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_nvme_set_options", 00:20:25.066 "params": { 00:20:25.066 "action_on_timeout": "none", 00:20:25.066 "timeout_us": 0, 00:20:25.066 "timeout_admin_us": 0, 00:20:25.066 "keep_alive_timeout_ms": 10000, 00:20:25.066 "arbitration_burst": 0, 00:20:25.066 "low_priority_weight": 0, 00:20:25.066 "medium_priority_weight": 0, 00:20:25.066 "high_priority_weight": 0, 00:20:25.066 "nvme_adminq_poll_period_us": 10000, 00:20:25.066 "nvme_ioq_poll_period_us": 0, 00:20:25.066 "io_queue_requests": 512, 00:20:25.066 "delay_cmd_submit": true, 00:20:25.066 "transport_retry_count": 4, 00:20:25.066 "bdev_retry_count": 3, 00:20:25.066 "transport_ack_timeout": 0, 00:20:25.066 "ctrlr_loss_timeout_sec": 0, 00:20:25.066 "reconnect_delay_sec": 0, 00:20:25.066 "fast_io_fail_timeout_sec": 0, 00:20:25.066 "disable_auto_failback": false, 00:20:25.066 "generate_uuids": false, 00:20:25.066 "transport_tos": 0, 00:20:25.066 
"nvme_error_stat": false, 00:20:25.066 "rdma_srq_size": 0, 00:20:25.066 "io_path_stat": false, 00:20:25.066 "allow_accel_sequence": false, 00:20:25.066 "rdma_max_cq_size": 0, 00:20:25.066 "rdma_cm_event_timeout_ms": 0, 00:20:25.066 "dhchap_digests": [ 00:20:25.066 "sha256", 00:20:25.066 "sha384", 00:20:25.066 "sha512" 00:20:25.066 ], 00:20:25.066 "dhchap_dhgroups": [ 00:20:25.066 "null", 00:20:25.066 "ffdhe2048", 00:20:25.066 "ffdhe3072", 00:20:25.066 "ffdhe4096", 00:20:25.066 "ffdhe6144", 00:20:25.066 "ffdhe8192" 00:20:25.066 ] 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_nvme_attach_controller", 00:20:25.066 "params": { 00:20:25.066 "name": "TLSTEST", 00:20:25.066 "trtype": "TCP", 00:20:25.066 "adrfam": "IPv4", 00:20:25.066 "traddr": "10.0.0.2", 00:20:25.066 "trsvcid": "4420", 00:20:25.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.066 "prchk_reftag": false, 00:20:25.066 "prchk_guard": false, 00:20:25.066 "ctrlr_loss_timeout_sec": 0, 00:20:25.066 "reconnect_delay_sec": 0, 00:20:25.066 "fast_io_fail_timeout_sec": 0, 00:20:25.066 "psk": "key0", 00:20:25.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.066 "hdgst": false, 00:20:25.066 "ddgst": false, 00:20:25.066 "multipath": "multipath" 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_nvme_set_hotplug", 00:20:25.066 "params": { 00:20:25.066 "period_us": 100000, 00:20:25.066 "enable": false 00:20:25.066 } 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "method": "bdev_wait_for_examine" 00:20:25.066 } 00:20:25.066 ] 00:20:25.066 }, 00:20:25.066 { 00:20:25.066 "subsystem": "nbd", 00:20:25.066 "config": [] 00:20:25.066 } 00:20:25.066 ] 00:20:25.066 }' 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 238771 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 238771 ']' 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 238771 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238771 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238771' 00:20:25.066 killing process with pid 238771 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 238771 00:20:25.066 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.066 00:20:25.066 Latency(us) 00:20:25.066 [2024-12-06T18:19:10.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.066 [2024-12-06T18:19:10.115Z] =================================================================================================================== 00:20:25.066 [2024-12-06T18:19:10.115Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.066 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 238771 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 238485 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 238485 ']' 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 238485 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238485 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238485' 00:20:25.325 killing process with pid 238485 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 238485 00:20:25.325 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 238485 00:20:25.583 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:25.583 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.583 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.583 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:25.583 "subsystems": [ 00:20:25.583 { 00:20:25.583 "subsystem": "keyring", 00:20:25.583 "config": [ 00:20:25.583 { 00:20:25.583 "method": "keyring_file_add_key", 00:20:25.583 "params": { 00:20:25.583 "name": "key0", 00:20:25.583 "path": "/tmp/tmp.Nubxqbildy" 00:20:25.583 } 00:20:25.583 } 00:20:25.583 ] 00:20:25.583 }, 00:20:25.583 { 00:20:25.583 "subsystem": "iobuf", 00:20:25.583 "config": [ 00:20:25.583 { 00:20:25.583 "method": "iobuf_set_options", 00:20:25.583 "params": { 00:20:25.583 "small_pool_count": 8192, 00:20:25.583 "large_pool_count": 1024, 00:20:25.583 "small_bufsize": 8192, 00:20:25.583 "large_bufsize": 135168, 00:20:25.583 "enable_numa": false 00:20:25.583 } 00:20:25.583 } 00:20:25.583 ] 00:20:25.583 }, 00:20:25.583 
{ 00:20:25.583 "subsystem": "sock", 00:20:25.583 "config": [ 00:20:25.583 { 00:20:25.583 "method": "sock_set_default_impl", 00:20:25.583 "params": { 00:20:25.583 "impl_name": "posix" 00:20:25.583 } 00:20:25.583 }, 00:20:25.583 { 00:20:25.583 "method": "sock_impl_set_options", 00:20:25.583 "params": { 00:20:25.583 "impl_name": "ssl", 00:20:25.583 "recv_buf_size": 4096, 00:20:25.583 "send_buf_size": 4096, 00:20:25.583 "enable_recv_pipe": true, 00:20:25.583 "enable_quickack": false, 00:20:25.583 "enable_placement_id": 0, 00:20:25.583 "enable_zerocopy_send_server": true, 00:20:25.583 "enable_zerocopy_send_client": false, 00:20:25.583 "zerocopy_threshold": 0, 00:20:25.583 "tls_version": 0, 00:20:25.583 "enable_ktls": false 00:20:25.583 } 00:20:25.583 }, 00:20:25.583 { 00:20:25.583 "method": "sock_impl_set_options", 00:20:25.583 "params": { 00:20:25.583 "impl_name": "posix", 00:20:25.584 "recv_buf_size": 2097152, 00:20:25.584 "send_buf_size": 2097152, 00:20:25.584 "enable_recv_pipe": true, 00:20:25.584 "enable_quickack": false, 00:20:25.584 "enable_placement_id": 0, 00:20:25.584 "enable_zerocopy_send_server": true, 00:20:25.584 "enable_zerocopy_send_client": false, 00:20:25.584 "zerocopy_threshold": 0, 00:20:25.584 "tls_version": 0, 00:20:25.584 "enable_ktls": false 00:20:25.584 } 00:20:25.584 } 00:20:25.584 ] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "vmd", 00:20:25.584 "config": [] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "accel", 00:20:25.584 "config": [ 00:20:25.584 { 00:20:25.584 "method": "accel_set_options", 00:20:25.584 "params": { 00:20:25.584 "small_cache_size": 128, 00:20:25.584 "large_cache_size": 16, 00:20:25.584 "task_count": 2048, 00:20:25.584 "sequence_count": 2048, 00:20:25.584 "buf_count": 2048 00:20:25.584 } 00:20:25.584 } 00:20:25.584 ] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "bdev", 00:20:25.584 "config": [ 00:20:25.584 { 00:20:25.584 "method": "bdev_set_options", 00:20:25.584 "params": { 00:20:25.584 
"bdev_io_pool_size": 65535, 00:20:25.584 "bdev_io_cache_size": 256, 00:20:25.584 "bdev_auto_examine": true, 00:20:25.584 "iobuf_small_cache_size": 128, 00:20:25.584 "iobuf_large_cache_size": 16 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_raid_set_options", 00:20:25.584 "params": { 00:20:25.584 "process_window_size_kb": 1024, 00:20:25.584 "process_max_bandwidth_mb_sec": 0 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_iscsi_set_options", 00:20:25.584 "params": { 00:20:25.584 "timeout_sec": 30 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_nvme_set_options", 00:20:25.584 "params": { 00:20:25.584 "action_on_timeout": "none", 00:20:25.584 "timeout_us": 0, 00:20:25.584 "timeout_admin_us": 0, 00:20:25.584 "keep_alive_timeout_ms": 10000, 00:20:25.584 "arbitration_burst": 0, 00:20:25.584 "low_priority_weight": 0, 00:20:25.584 "medium_priority_weight": 0, 00:20:25.584 "high_priority_weight": 0, 00:20:25.584 "nvme_adminq_poll_period_us": 10000, 00:20:25.584 "nvme_ioq_poll_period_us": 0, 00:20:25.584 "io_queue_requests": 0, 00:20:25.584 "delay_cmd_submit": true, 00:20:25.584 "transport_retry_count": 4, 00:20:25.584 "bdev_retry_count": 3, 00:20:25.584 "transport_ack_timeout": 0, 00:20:25.584 "ctrlr_loss_timeout_sec": 0, 00:20:25.584 "reconnect_delay_sec": 0, 00:20:25.584 "fast_io_fail_timeout_sec": 0, 00:20:25.584 "disable_auto_failback": false, 00:20:25.584 "generate_uuids": false, 00:20:25.584 "transport_tos": 0, 00:20:25.584 "nvme_error_stat": false, 00:20:25.584 "rdma_srq_size": 0, 00:20:25.584 "io_path_stat": false, 00:20:25.584 "allow_accel_sequence": false, 00:20:25.584 "rdma_max_cq_size": 0, 00:20:25.584 "rdma_cm_event_timeout_ms": 0, 00:20:25.584 "dhchap_digests": [ 00:20:25.584 "sha256", 00:20:25.584 "sha384", 00:20:25.584 "sha512" 00:20:25.584 ], 00:20:25.584 "dhchap_dhgroups": [ 00:20:25.584 "null", 00:20:25.584 "ffdhe2048", 00:20:25.584 "ffdhe3072", 00:20:25.584 "ffdhe4096", 
00:20:25.584 "ffdhe6144", 00:20:25.584 "ffdhe8192" 00:20:25.584 ] 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_nvme_set_hotplug", 00:20:25.584 "params": { 00:20:25.584 "period_us": 100000, 00:20:25.584 "enable": false 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_malloc_create", 00:20:25.584 "params": { 00:20:25.584 "name": "malloc0", 00:20:25.584 "num_blocks": 8192, 00:20:25.584 "block_size": 4096, 00:20:25.584 "physical_block_size": 4096, 00:20:25.584 "uuid": "7d29ef8f-f780-4181-b783-b0e4cffc82e1", 00:20:25.584 "optimal_io_boundary": 0, 00:20:25.584 "md_size": 0, 00:20:25.584 "dif_type": 0, 00:20:25.584 "dif_is_head_of_md": false, 00:20:25.584 "dif_pi_format": 0 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "bdev_wait_for_examine" 00:20:25.584 } 00:20:25.584 ] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "nbd", 00:20:25.584 "config": [] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "scheduler", 00:20:25.584 "config": [ 00:20:25.584 { 00:20:25.584 "method": "framework_set_scheduler", 00:20:25.584 "params": { 00:20:25.584 "name": "static" 00:20:25.584 } 00:20:25.584 } 00:20:25.584 ] 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "subsystem": "nvmf", 00:20:25.584 "config": [ 00:20:25.584 { 00:20:25.584 "method": "nvmf_set_config", 00:20:25.584 "params": { 00:20:25.584 "discovery_filter": "match_any", 00:20:25.584 "admin_cmd_passthru": { 00:20:25.584 "identify_ctrlr": false 00:20:25.584 }, 00:20:25.584 "dhchap_digests": [ 00:20:25.584 "sha256", 00:20:25.584 "sha384", 00:20:25.584 "sha512" 00:20:25.584 ], 00:20:25.584 "dhchap_dhgroups": [ 00:20:25.584 "null", 00:20:25.584 "ffdhe2048", 00:20:25.584 "ffdhe3072", 00:20:25.584 "ffdhe4096", 00:20:25.584 "ffdhe6144", 00:20:25.584 "ffdhe8192" 00:20:25.584 ] 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_set_max_subsystems", 00:20:25.584 "params": { 00:20:25.584 "max_subsystems": 1024 00:20:25.584 
} 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_set_crdt", 00:20:25.584 "params": { 00:20:25.584 "crdt1": 0, 00:20:25.584 "crdt2": 0, 00:20:25.584 "crdt3": 0 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_create_transport", 00:20:25.584 "params": { 00:20:25.584 "trtype": "TCP", 00:20:25.584 "max_queue_depth": 128, 00:20:25.584 "max_io_qpairs_per_ctrlr": 127, 00:20:25.584 "in_capsule_data_size": 4096, 00:20:25.584 "max_io_size": 131072, 00:20:25.584 "io_unit_size": 131072, 00:20:25.584 "max_aq_depth": 128, 00:20:25.584 "num_shared_buffers": 511, 00:20:25.584 "buf_cache_size": 4294967295, 00:20:25.584 "dif_insert_or_strip": false, 00:20:25.584 "zcopy": false, 00:20:25.584 "c2h_success": false, 00:20:25.584 "sock_priority": 0, 00:20:25.584 "abort_timeout_sec": 1, 00:20:25.584 "ack_timeout": 0, 00:20:25.584 "data_wr_pool_size": 0 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_create_subsystem", 00:20:25.584 "params": { 00:20:25.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.584 "allow_any_host": false, 00:20:25.584 "serial_number": "SPDK00000000000001", 00:20:25.584 "model_number": "SPDK bdev Controller", 00:20:25.584 "max_namespaces": 10, 00:20:25.584 "min_cntlid": 1, 00:20:25.584 "max_cntlid": 65519, 00:20:25.584 "ana_reporting": false 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_subsystem_add_host", 00:20:25.584 "params": { 00:20:25.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.584 "host": "nqn.2016-06.io.spdk:host1", 00:20:25.584 "psk": "key0" 00:20:25.584 } 00:20:25.584 }, 00:20:25.584 { 00:20:25.584 "method": "nvmf_subsystem_add_ns", 00:20:25.584 "params": { 00:20:25.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.584 "namespace": { 00:20:25.584 "nsid": 1, 00:20:25.584 "bdev_name": "malloc0", 00:20:25.584 "nguid": "7D29EF8FF7804181B783B0E4CFFC82E1", 00:20:25.584 "uuid": "7d29ef8f-f780-4181-b783-b0e4cffc82e1", 00:20:25.584 "no_auto_visible": false 
00:20:25.584 } 00:20:25.584 } 00:20:25.585 }, 00:20:25.585 { 00:20:25.585 "method": "nvmf_subsystem_add_listener", 00:20:25.585 "params": { 00:20:25.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.585 "listen_address": { 00:20:25.585 "trtype": "TCP", 00:20:25.585 "adrfam": "IPv4", 00:20:25.585 "traddr": "10.0.0.2", 00:20:25.585 "trsvcid": "4420" 00:20:25.585 }, 00:20:25.585 "secure_channel": true 00:20:25.585 } 00:20:25.585 } 00:20:25.585 ] 00:20:25.585 } 00:20:25.585 ] 00:20:25.585 }' 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=239050 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 239050 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 239050 ']' 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.585 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.845 [2024-12-06 19:19:10.669587] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:25.845 [2024-12-06 19:19:10.669667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.845 [2024-12-06 19:19:10.737693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.845 [2024-12-06 19:19:10.789888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.845 [2024-12-06 19:19:10.789953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.845 [2024-12-06 19:19:10.789976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.845 [2024-12-06 19:19:10.789986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.845 [2024-12-06 19:19:10.789995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:25.845 [2024-12-06 19:19:10.790637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.127 [2024-12-06 19:19:11.033996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.127 [2024-12-06 19:19:11.066019] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.127 [2024-12-06 19:19:11.066297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=239203 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 239203 /var/tmp/bdevperf.sock 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 239203 ']' 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:26.694 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:26.695 "subsystems": [ 00:20:26.695 { 00:20:26.695 "subsystem": "keyring", 00:20:26.695 "config": [ 00:20:26.695 { 00:20:26.695 "method": "keyring_file_add_key", 00:20:26.695 "params": { 00:20:26.695 "name": "key0", 00:20:26.695 "path": "/tmp/tmp.Nubxqbildy" 00:20:26.695 } 00:20:26.695 } 00:20:26.695 ] 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "subsystem": "iobuf", 00:20:26.695 "config": [ 00:20:26.695 { 00:20:26.695 "method": "iobuf_set_options", 00:20:26.695 "params": { 00:20:26.695 "small_pool_count": 8192, 00:20:26.695 "large_pool_count": 1024, 00:20:26.695 "small_bufsize": 8192, 00:20:26.695 "large_bufsize": 135168, 00:20:26.695 "enable_numa": false 00:20:26.695 } 00:20:26.695 } 00:20:26.695 ] 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "subsystem": "sock", 00:20:26.695 "config": [ 00:20:26.695 { 00:20:26.695 "method": "sock_set_default_impl", 00:20:26.695 "params": { 00:20:26.695 "impl_name": "posix" 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "sock_impl_set_options", 00:20:26.695 "params": { 00:20:26.695 "impl_name": "ssl", 00:20:26.695 "recv_buf_size": 4096, 00:20:26.695 "send_buf_size": 4096, 00:20:26.695 "enable_recv_pipe": true, 00:20:26.695 "enable_quickack": false, 00:20:26.695 "enable_placement_id": 0, 00:20:26.695 "enable_zerocopy_send_server": true, 00:20:26.695 "enable_zerocopy_send_client": false, 00:20:26.695 "zerocopy_threshold": 0, 00:20:26.695 "tls_version": 0, 00:20:26.695 "enable_ktls": false 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "sock_impl_set_options", 00:20:26.695 "params": { 00:20:26.695 "impl_name": "posix", 00:20:26.695 "recv_buf_size": 2097152, 00:20:26.695 "send_buf_size": 2097152, 00:20:26.695 "enable_recv_pipe": true, 00:20:26.695 "enable_quickack": false, 00:20:26.695 "enable_placement_id": 0, 00:20:26.695 "enable_zerocopy_send_server": true, 00:20:26.695 
"enable_zerocopy_send_client": false, 00:20:26.695 "zerocopy_threshold": 0, 00:20:26.695 "tls_version": 0, 00:20:26.695 "enable_ktls": false 00:20:26.695 } 00:20:26.695 } 00:20:26.695 ] 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "subsystem": "vmd", 00:20:26.695 "config": [] 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "subsystem": "accel", 00:20:26.695 "config": [ 00:20:26.695 { 00:20:26.695 "method": "accel_set_options", 00:20:26.695 "params": { 00:20:26.695 "small_cache_size": 128, 00:20:26.695 "large_cache_size": 16, 00:20:26.695 "task_count": 2048, 00:20:26.695 "sequence_count": 2048, 00:20:26.695 "buf_count": 2048 00:20:26.695 } 00:20:26.695 } 00:20:26.695 ] 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "subsystem": "bdev", 00:20:26.695 "config": [ 00:20:26.695 { 00:20:26.695 "method": "bdev_set_options", 00:20:26.695 "params": { 00:20:26.695 "bdev_io_pool_size": 65535, 00:20:26.695 "bdev_io_cache_size": 256, 00:20:26.695 "bdev_auto_examine": true, 00:20:26.695 "iobuf_small_cache_size": 128, 00:20:26.695 "iobuf_large_cache_size": 16 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "bdev_raid_set_options", 00:20:26.695 "params": { 00:20:26.695 "process_window_size_kb": 1024, 00:20:26.695 "process_max_bandwidth_mb_sec": 0 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "bdev_iscsi_set_options", 00:20:26.695 "params": { 00:20:26.695 "timeout_sec": 30 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "bdev_nvme_set_options", 00:20:26.695 "params": { 00:20:26.695 "action_on_timeout": "none", 00:20:26.695 "timeout_us": 0, 00:20:26.695 "timeout_admin_us": 0, 00:20:26.695 "keep_alive_timeout_ms": 10000, 00:20:26.695 "arbitration_burst": 0, 00:20:26.695 "low_priority_weight": 0, 00:20:26.695 "medium_priority_weight": 0, 00:20:26.695 "high_priority_weight": 0, 00:20:26.695 "nvme_adminq_poll_period_us": 10000, 00:20:26.695 "nvme_ioq_poll_period_us": 0, 00:20:26.695 "io_queue_requests": 512, 00:20:26.695 
"delay_cmd_submit": true, 00:20:26.695 "transport_retry_count": 4, 00:20:26.695 "bdev_retry_count": 3, 00:20:26.695 "transport_ack_timeout": 0, 00:20:26.695 "ctrlr_loss_timeout_sec": 0, 00:20:26.695 "reconnect_delay_sec": 0, 00:20:26.695 "fast_io_fail_timeout_sec": 0, 00:20:26.695 "disable_auto_failback": false, 00:20:26.695 "generate_uuids": false, 00:20:26.695 "transport_tos": 0, 00:20:26.695 "nvme_error_stat": false, 00:20:26.695 "rdma_srq_size": 0, 00:20:26.695 "io_path_stat": false, 00:20:26.695 "allow_accel_sequence": false, 00:20:26.695 "rdma_max_cq_size": 0, 00:20:26.695 "rdma_cm_event_timeout_ms": 0, 00:20:26.695 "dhchap_digests": [ 00:20:26.695 "sha256", 00:20:26.695 "sha384", 00:20:26.695 "sha512" 00:20:26.695 ], 00:20:26.695 "dhchap_dhgroups": [ 00:20:26.695 "null", 00:20:26.695 "ffdhe2048", 00:20:26.695 "ffdhe3072", 00:20:26.695 "ffdhe4096", 00:20:26.695 "ffdhe6144", 00:20:26.695 "ffdhe8192" 00:20:26.695 ] 00:20:26.695 } 00:20:26.695 }, 00:20:26.695 { 00:20:26.695 "method": "bdev_nvme_attach_controller", 00:20:26.695 "params": { 00:20:26.695 "name": "TLSTEST", 00:20:26.695 "trtype": "TCP", 00:20:26.695 "adrfam": "IPv4", 00:20:26.695 "traddr": "10.0.0.2", 00:20:26.695 "trsvcid": "4420", 00:20:26.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.695 "prchk_reftag": false, 00:20:26.695 "prchk_guard": false, 00:20:26.695 "ctrlr_loss_timeout_sec": 0, 00:20:26.695 "reconnect_delay_sec": 0, 00:20:26.695 "fast_io_fail_timeout_sec": 0, 00:20:26.696 "psk": "key0", 00:20:26.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.696 "hdgst": false, 00:20:26.696 "ddgst": false, 00:20:26.696 "multipath": "multipath" 00:20:26.696 } 00:20:26.696 }, 00:20:26.696 { 00:20:26.696 "method": "bdev_nvme_set_hotplug", 00:20:26.696 "params": { 00:20:26.696 "period_us": 100000, 00:20:26.696 "enable": false 00:20:26.696 } 00:20:26.696 }, 00:20:26.696 { 00:20:26.696 "method": "bdev_wait_for_examine" 00:20:26.696 } 00:20:26.696 ] 00:20:26.696 }, 00:20:26.696 { 00:20:26.696 
"subsystem": "nbd", 00:20:26.696 "config": [] 00:20:26.696 } 00:20:26.696 ] 00:20:26.696 }' 00:20:26.696 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.696 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.696 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.956 [2024-12-06 19:19:11.766182] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:26.956 [2024-12-06 19:19:11.766269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239203 ] 00:20:26.956 [2024-12-06 19:19:11.832989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.956 [2024-12-06 19:19:11.889009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.215 [2024-12-06 19:19:12.071716] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.215 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.215 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.215 19:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:27.472 Running I/O for 10 seconds... 
00:20:29.343 3104.00 IOPS, 12.12 MiB/s [2024-12-06T18:19:15.330Z] 3249.50 IOPS, 12.69 MiB/s [2024-12-06T18:19:16.710Z] 3300.33 IOPS, 12.89 MiB/s [2024-12-06T18:19:17.647Z] 3359.75 IOPS, 13.12 MiB/s [2024-12-06T18:19:18.586Z] 3357.80 IOPS, 13.12 MiB/s [2024-12-06T18:19:19.525Z] 3382.33 IOPS, 13.21 MiB/s [2024-12-06T18:19:20.460Z] 3385.43 IOPS, 13.22 MiB/s [2024-12-06T18:19:21.398Z] 3409.25 IOPS, 13.32 MiB/s [2024-12-06T18:19:22.773Z] 3393.89 IOPS, 13.26 MiB/s [2024-12-06T18:19:22.773Z] 3391.70 IOPS, 13.25 MiB/s 00:20:37.724 Latency(us) 00:20:37.724 [2024-12-06T18:19:22.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.724 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.724 Verification LBA range: start 0x0 length 0x2000 00:20:37.724 TLSTESTn1 : 10.03 3392.70 13.25 0.00 0.00 37647.66 9806.13 36700.16 00:20:37.724 [2024-12-06T18:19:22.773Z] =================================================================================================================== 00:20:37.724 [2024-12-06T18:19:22.773Z] Total : 3392.70 13.25 0.00 0.00 37647.66 9806.13 36700.16 00:20:37.724 { 00:20:37.724 "results": [ 00:20:37.724 { 00:20:37.724 "job": "TLSTESTn1", 00:20:37.724 "core_mask": "0x4", 00:20:37.724 "workload": "verify", 00:20:37.724 "status": "finished", 00:20:37.724 "verify_range": { 00:20:37.724 "start": 0, 00:20:37.724 "length": 8192 00:20:37.724 }, 00:20:37.724 "queue_depth": 128, 00:20:37.724 "io_size": 4096, 00:20:37.724 "runtime": 10.034491, 00:20:37.724 "iops": 3392.698244484947, 00:20:37.724 "mibps": 13.252727517519324, 00:20:37.724 "io_failed": 0, 00:20:37.724 "io_timeout": 0, 00:20:37.724 "avg_latency_us": 37647.660943180286, 00:20:37.724 "min_latency_us": 9806.127407407408, 00:20:37.724 "max_latency_us": 36700.16 00:20:37.724 } 00:20:37.724 ], 00:20:37.724 "core_count": 1 00:20:37.724 } 00:20:37.724 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:20:37.724 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 239203 00:20:37.724 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 239203 ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 239203 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239203 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239203' 00:20:37.725 killing process with pid 239203 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 239203 00:20:37.725 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.725 00:20:37.725 Latency(us) 00:20:37.725 [2024-12-06T18:19:22.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.725 [2024-12-06T18:19:22.774Z] =================================================================================================================== 00:20:37.725 [2024-12-06T18:19:22.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 239203 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 239050 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' 
-z 239050 ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 239050 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239050 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239050' 00:20:37.725 killing process with pid 239050 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 239050 00:20:37.725 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 239050 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=240526 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 240526 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 240526 ']' 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.983 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.983 [2024-12-06 19:19:22.967946] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:37.983 [2024-12-06 19:19:22.968052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.241 [2024-12-06 19:19:23.040417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.241 [2024-12-06 19:19:23.095663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.241 [2024-12-06 19:19:23.095728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.241 [2024-12-06 19:19:23.095743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.241 [2024-12-06 19:19:23.095755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.241 [2024-12-06 19:19:23.095764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
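The key file registered below via `keyring_file_add_key` (`/tmp/tmp.Nubxqbildy`) holds an NVMe/TCP TLS pre-shared key in the TP-8006 PSK interchange format. A rough sketch of how such a configured-PSK string is assembled (the helper name is ours, and the little-endian placement of the CRC-32 is our reading of the format, not verified against a reference implementation):

```python
import base64
import zlib


def nvme_tls_configured_psk(psk: bytes) -> str:
    """Assemble an NVMe TLS PSK interchange string (sketch only).

    The '01' hash identifier selects SHA-256, which pairs with a 32-byte
    PSK; the CRC-32 of the key bytes is assumed to be appended in
    little-endian order before base64 encoding.
    """
    if len(psk) != 32:
        raise ValueError("expected a 32-byte PSK for hash id 01")
    blob = psk + zlib.crc32(psk).to_bytes(4, "little")
    return f"NVMeTLSkey-1:01:{base64.b64encode(blob).decode()}:"


print(nvme_tls_configured_psk(bytes(range(32))))
```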
00:20:38.241 [2024-12-06 19:19:23.096363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Nubxqbildy 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Nubxqbildy 00:20:38.241 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.501 [2024-12-06 19:19:23.475326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.501 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.759 19:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.018 [2024-12-06 19:19:24.012791] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.018 [2024-12-06 19:19:24.013081] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:39.018 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.278 malloc0 00:20:39.278 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.536 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:40.100 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=240813 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 240813 /var/tmp/bdevperf.sock 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 240813 ']' 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:40.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.357 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.357 [2024-12-06 19:19:25.249429] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:40.357 [2024-12-06 19:19:25.249502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240813 ] 00:20:40.357 [2024-12-06 19:19:25.314389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.357 [2024-12-06 19:19:25.373235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.615 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.615 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.615 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:40.872 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:41.129 [2024-12-06 19:19:26.108037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.388 nvme0n1 00:20:41.388 19:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.388 Running I/O for 1 seconds... 00:20:42.327 3323.00 IOPS, 12.98 MiB/s 00:20:42.327 Latency(us) 00:20:42.327 [2024-12-06T18:19:27.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.327 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:42.327 Verification LBA range: start 0x0 length 0x2000 00:20:42.328 nvme0n1 : 1.02 3384.53 13.22 0.00 0.00 37475.11 5873.97 53982.25 00:20:42.328 [2024-12-06T18:19:27.377Z] =================================================================================================================== 00:20:42.328 [2024-12-06T18:19:27.377Z] Total : 3384.53 13.22 0.00 0.00 37475.11 5873.97 53982.25 00:20:42.328 { 00:20:42.328 "results": [ 00:20:42.328 { 00:20:42.328 "job": "nvme0n1", 00:20:42.328 "core_mask": "0x2", 00:20:42.328 "workload": "verify", 00:20:42.328 "status": "finished", 00:20:42.328 "verify_range": { 00:20:42.328 "start": 0, 00:20:42.328 "length": 8192 00:20:42.328 }, 00:20:42.328 "queue_depth": 128, 00:20:42.328 "io_size": 4096, 00:20:42.328 "runtime": 1.019936, 00:20:42.328 "iops": 3384.5260879113985, 00:20:42.328 "mibps": 13.2208050309039, 00:20:42.328 "io_failed": 0, 00:20:42.328 "io_timeout": 0, 00:20:42.328 "avg_latency_us": 37475.11103557787, 00:20:42.328 "min_latency_us": 5873.967407407407, 00:20:42.328 "max_latency_us": 53982.24592592593 00:20:42.328 } 00:20:42.328 ], 00:20:42.328 "core_count": 1 00:20:42.328 } 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 240813 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 240813 ']' 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 240813 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.328 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240813 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240813' 00:20:42.585 killing process with pid 240813 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 240813 00:20:42.585 Received shutdown signal, test time was about 1.000000 seconds 00:20:42.585 00:20:42.585 Latency(us) 00:20:42.585 [2024-12-06T18:19:27.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.585 [2024-12-06T18:19:27.634Z] =================================================================================================================== 00:20:42.585 [2024-12-06T18:19:27.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 240813 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 240526 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 240526 ']' 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 240526 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.585 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 240526 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240526' 00:20:42.844 killing process with pid 240526 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 240526 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 240526 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=241097 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 241097 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 241097 ']' 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.844 19:19:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.103 [2024-12-06 19:19:27.939698] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:43.103 [2024-12-06 19:19:27.939826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.103 [2024-12-06 19:19:28.011932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.103 [2024-12-06 19:19:28.067414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.103 [2024-12-06 19:19:28.067472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.103 [2024-12-06 19:19:28.067486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.103 [2024-12-06 19:19:28.067497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.103 [2024-12-06 19:19:28.067506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.103 [2024-12-06 19:19:28.068140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.361 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.361 [2024-12-06 19:19:28.203033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.361 malloc0 00:20:43.361 [2024-12-06 19:19:28.233468] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.362 [2024-12-06 19:19:28.233735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=241119 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 241119 /var/tmp/bdevperf.sock 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 241119 ']' 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.362 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.362 [2024-12-06 19:19:28.303953] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:43.362 [2024-12-06 19:19:28.304048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241119 ] 00:20:43.362 [2024-12-06 19:19:28.374399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.621 [2024-12-06 19:19:28.438615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.621 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.621 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.621 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nubxqbildy 00:20:43.879 19:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:44.136 [2024-12-06 19:19:29.067065] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.136 nvme0n1 00:20:44.136 19:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.394 Running I/O for 1 seconds... 
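Both verify runs in this log keep a fixed queue depth of 128 in flight, so the reported IOPS and average latency are linked by Little's law: avg latency ≈ queue_depth / IOPS. A quick sanity check against the figures reported here (the helper name is ours; the small gap from the reported value is expected, since the measured average excludes some ramp-up effects):

```python
def closed_loop_latency_us(queue_depth: int, iops: float) -> float:
    """Little's law estimate of mean per-I/O latency (us) for a workload
    that keeps a fixed number of I/Os outstanding."""
    return queue_depth / iops * 1_000_000

# At queue depth 128 and ~3392.7 IOPS the estimate is ~37.7 ms, close to
# the ~37.6 ms avg_latency_us bdevperf reports for that run.
print(f"{closed_loop_latency_us(128, 3392.70):.0f} us")
```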
00:20:45.332 3258.00 IOPS, 12.73 MiB/s 00:20:45.332 Latency(us) 00:20:45.332 [2024-12-06T18:19:30.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.332 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:45.332 Verification LBA range: start 0x0 length 0x2000 00:20:45.332 nvme0n1 : 1.04 3255.18 12.72 0.00 0.00 38667.70 8689.59 56312.41 00:20:45.332 [2024-12-06T18:19:30.381Z] =================================================================================================================== 00:20:45.332 [2024-12-06T18:19:30.381Z] Total : 3255.18 12.72 0.00 0.00 38667.70 8689.59 56312.41 00:20:45.332 { 00:20:45.332 "results": [ 00:20:45.332 { 00:20:45.332 "job": "nvme0n1", 00:20:45.332 "core_mask": "0x2", 00:20:45.332 "workload": "verify", 00:20:45.332 "status": "finished", 00:20:45.332 "verify_range": { 00:20:45.332 "start": 0, 00:20:45.332 "length": 8192 00:20:45.332 }, 00:20:45.332 "queue_depth": 128, 00:20:45.332 "io_size": 4096, 00:20:45.332 "runtime": 1.040188, 00:20:45.332 "iops": 3255.180794241041, 00:20:45.332 "mibps": 12.715549977504066, 00:20:45.332 "io_failed": 0, 00:20:45.332 "io_timeout": 0, 00:20:45.332 "avg_latency_us": 38667.70237710835, 00:20:45.332 "min_latency_us": 8689.588148148148, 00:20:45.332 "max_latency_us": 56312.414814814816 00:20:45.332 } 00:20:45.332 ], 00:20:45.332 "core_count": 1 00:20:45.332 } 00:20:45.332 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:45.332 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.332 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.591 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.591 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:45.591 "subsystems": [ 00:20:45.591 { 00:20:45.591 "subsystem": 
"keyring", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "keyring_file_add_key", 00:20:45.591 "params": { 00:20:45.591 "name": "key0", 00:20:45.591 "path": "/tmp/tmp.Nubxqbildy" 00:20:45.591 } 00:20:45.591 } 00:20:45.591 ] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "iobuf", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "iobuf_set_options", 00:20:45.591 "params": { 00:20:45.591 "small_pool_count": 8192, 00:20:45.591 "large_pool_count": 1024, 00:20:45.591 "small_bufsize": 8192, 00:20:45.591 "large_bufsize": 135168, 00:20:45.591 "enable_numa": false 00:20:45.591 } 00:20:45.591 } 00:20:45.591 ] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "sock", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "sock_set_default_impl", 00:20:45.591 "params": { 00:20:45.591 "impl_name": "posix" 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "sock_impl_set_options", 00:20:45.591 "params": { 00:20:45.591 "impl_name": "ssl", 00:20:45.591 "recv_buf_size": 4096, 00:20:45.591 "send_buf_size": 4096, 00:20:45.591 "enable_recv_pipe": true, 00:20:45.591 "enable_quickack": false, 00:20:45.591 "enable_placement_id": 0, 00:20:45.591 "enable_zerocopy_send_server": true, 00:20:45.591 "enable_zerocopy_send_client": false, 00:20:45.591 "zerocopy_threshold": 0, 00:20:45.591 "tls_version": 0, 00:20:45.591 "enable_ktls": false 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "sock_impl_set_options", 00:20:45.591 "params": { 00:20:45.591 "impl_name": "posix", 00:20:45.591 "recv_buf_size": 2097152, 00:20:45.591 "send_buf_size": 2097152, 00:20:45.591 "enable_recv_pipe": true, 00:20:45.591 "enable_quickack": false, 00:20:45.591 "enable_placement_id": 0, 00:20:45.591 "enable_zerocopy_send_server": true, 00:20:45.591 "enable_zerocopy_send_client": false, 00:20:45.591 "zerocopy_threshold": 0, 00:20:45.591 "tls_version": 0, 00:20:45.591 "enable_ktls": false 00:20:45.591 } 00:20:45.591 } 00:20:45.591 
] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "vmd", 00:20:45.591 "config": [] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "accel", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "accel_set_options", 00:20:45.591 "params": { 00:20:45.591 "small_cache_size": 128, 00:20:45.591 "large_cache_size": 16, 00:20:45.591 "task_count": 2048, 00:20:45.591 "sequence_count": 2048, 00:20:45.591 "buf_count": 2048 00:20:45.591 } 00:20:45.591 } 00:20:45.591 ] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "bdev", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "bdev_set_options", 00:20:45.591 "params": { 00:20:45.591 "bdev_io_pool_size": 65535, 00:20:45.591 "bdev_io_cache_size": 256, 00:20:45.591 "bdev_auto_examine": true, 00:20:45.591 "iobuf_small_cache_size": 128, 00:20:45.591 "iobuf_large_cache_size": 16 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_raid_set_options", 00:20:45.591 "params": { 00:20:45.591 "process_window_size_kb": 1024, 00:20:45.591 "process_max_bandwidth_mb_sec": 0 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_iscsi_set_options", 00:20:45.591 "params": { 00:20:45.591 "timeout_sec": 30 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_nvme_set_options", 00:20:45.591 "params": { 00:20:45.591 "action_on_timeout": "none", 00:20:45.591 "timeout_us": 0, 00:20:45.591 "timeout_admin_us": 0, 00:20:45.591 "keep_alive_timeout_ms": 10000, 00:20:45.591 "arbitration_burst": 0, 00:20:45.591 "low_priority_weight": 0, 00:20:45.591 "medium_priority_weight": 0, 00:20:45.591 "high_priority_weight": 0, 00:20:45.591 "nvme_adminq_poll_period_us": 10000, 00:20:45.591 "nvme_ioq_poll_period_us": 0, 00:20:45.591 "io_queue_requests": 0, 00:20:45.591 "delay_cmd_submit": true, 00:20:45.591 "transport_retry_count": 4, 00:20:45.591 "bdev_retry_count": 3, 00:20:45.591 "transport_ack_timeout": 0, 00:20:45.591 "ctrlr_loss_timeout_sec": 0, 
00:20:45.591 "reconnect_delay_sec": 0, 00:20:45.591 "fast_io_fail_timeout_sec": 0, 00:20:45.591 "disable_auto_failback": false, 00:20:45.591 "generate_uuids": false, 00:20:45.591 "transport_tos": 0, 00:20:45.591 "nvme_error_stat": false, 00:20:45.591 "rdma_srq_size": 0, 00:20:45.591 "io_path_stat": false, 00:20:45.591 "allow_accel_sequence": false, 00:20:45.591 "rdma_max_cq_size": 0, 00:20:45.591 "rdma_cm_event_timeout_ms": 0, 00:20:45.591 "dhchap_digests": [ 00:20:45.591 "sha256", 00:20:45.591 "sha384", 00:20:45.591 "sha512" 00:20:45.591 ], 00:20:45.591 "dhchap_dhgroups": [ 00:20:45.591 "null", 00:20:45.591 "ffdhe2048", 00:20:45.591 "ffdhe3072", 00:20:45.591 "ffdhe4096", 00:20:45.591 "ffdhe6144", 00:20:45.591 "ffdhe8192" 00:20:45.591 ] 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_nvme_set_hotplug", 00:20:45.591 "params": { 00:20:45.591 "period_us": 100000, 00:20:45.591 "enable": false 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_malloc_create", 00:20:45.591 "params": { 00:20:45.591 "name": "malloc0", 00:20:45.591 "num_blocks": 8192, 00:20:45.591 "block_size": 4096, 00:20:45.591 "physical_block_size": 4096, 00:20:45.591 "uuid": "ddf550f0-0d68-478c-b9f8-2798da707ff3", 00:20:45.591 "optimal_io_boundary": 0, 00:20:45.591 "md_size": 0, 00:20:45.591 "dif_type": 0, 00:20:45.591 "dif_is_head_of_md": false, 00:20:45.591 "dif_pi_format": 0 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "bdev_wait_for_examine" 00:20:45.591 } 00:20:45.591 ] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "nbd", 00:20:45.591 "config": [] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "scheduler", 00:20:45.591 "config": [ 00:20:45.591 { 00:20:45.591 "method": "framework_set_scheduler", 00:20:45.591 "params": { 00:20:45.591 "name": "static" 00:20:45.591 } 00:20:45.591 } 00:20:45.591 ] 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "subsystem": "nvmf", 00:20:45.591 "config": [ 00:20:45.591 { 
00:20:45.591 "method": "nvmf_set_config", 00:20:45.591 "params": { 00:20:45.591 "discovery_filter": "match_any", 00:20:45.591 "admin_cmd_passthru": { 00:20:45.591 "identify_ctrlr": false 00:20:45.591 }, 00:20:45.591 "dhchap_digests": [ 00:20:45.591 "sha256", 00:20:45.591 "sha384", 00:20:45.591 "sha512" 00:20:45.591 ], 00:20:45.591 "dhchap_dhgroups": [ 00:20:45.591 "null", 00:20:45.591 "ffdhe2048", 00:20:45.591 "ffdhe3072", 00:20:45.591 "ffdhe4096", 00:20:45.591 "ffdhe6144", 00:20:45.591 "ffdhe8192" 00:20:45.591 ] 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "nvmf_set_max_subsystems", 00:20:45.591 "params": { 00:20:45.591 "max_subsystems": 1024 00:20:45.591 } 00:20:45.591 }, 00:20:45.591 { 00:20:45.591 "method": "nvmf_set_crdt", 00:20:45.592 "params": { 00:20:45.592 "crdt1": 0, 00:20:45.592 "crdt2": 0, 00:20:45.592 "crdt3": 0 00:20:45.592 } 00:20:45.592 }, 00:20:45.592 { 00:20:45.592 "method": "nvmf_create_transport", 00:20:45.592 "params": { 00:20:45.592 "trtype": "TCP", 00:20:45.592 "max_queue_depth": 128, 00:20:45.592 "max_io_qpairs_per_ctrlr": 127, 00:20:45.592 "in_capsule_data_size": 4096, 00:20:45.592 "max_io_size": 131072, 00:20:45.592 "io_unit_size": 131072, 00:20:45.592 "max_aq_depth": 128, 00:20:45.592 "num_shared_buffers": 511, 00:20:45.592 "buf_cache_size": 4294967295, 00:20:45.592 "dif_insert_or_strip": false, 00:20:45.592 "zcopy": false, 00:20:45.592 "c2h_success": false, 00:20:45.592 "sock_priority": 0, 00:20:45.592 "abort_timeout_sec": 1, 00:20:45.592 "ack_timeout": 0, 00:20:45.592 "data_wr_pool_size": 0 00:20:45.592 } 00:20:45.592 }, 00:20:45.592 { 00:20:45.592 "method": "nvmf_create_subsystem", 00:20:45.592 "params": { 00:20:45.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.592 "allow_any_host": false, 00:20:45.592 "serial_number": "00000000000000000000", 00:20:45.592 "model_number": "SPDK bdev Controller", 00:20:45.592 "max_namespaces": 32, 00:20:45.592 "min_cntlid": 1, 00:20:45.592 "max_cntlid": 65519, 00:20:45.592 
"ana_reporting": false 00:20:45.592 } 00:20:45.592 }, 00:20:45.592 { 00:20:45.592 "method": "nvmf_subsystem_add_host", 00:20:45.592 "params": { 00:20:45.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.592 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.592 "psk": "key0" 00:20:45.592 } 00:20:45.592 }, 00:20:45.592 { 00:20:45.592 "method": "nvmf_subsystem_add_ns", 00:20:45.592 "params": { 00:20:45.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.592 "namespace": { 00:20:45.592 "nsid": 1, 00:20:45.592 "bdev_name": "malloc0", 00:20:45.592 "nguid": "DDF550F00D68478CB9F82798DA707FF3", 00:20:45.592 "uuid": "ddf550f0-0d68-478c-b9f8-2798da707ff3", 00:20:45.592 "no_auto_visible": false 00:20:45.592 } 00:20:45.592 } 00:20:45.592 }, 00:20:45.592 { 00:20:45.592 "method": "nvmf_subsystem_add_listener", 00:20:45.592 "params": { 00:20:45.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.592 "listen_address": { 00:20:45.592 "trtype": "TCP", 00:20:45.592 "adrfam": "IPv4", 00:20:45.592 "traddr": "10.0.0.2", 00:20:45.592 "trsvcid": "4420" 00:20:45.592 }, 00:20:45.592 "secure_channel": false, 00:20:45.592 "sock_impl": "ssl" 00:20:45.592 } 00:20:45.592 } 00:20:45.592 ] 00:20:45.592 } 00:20:45.592 ] 00:20:45.592 }' 00:20:45.592 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.850 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:45.850 "subsystems": [ 00:20:45.850 { 00:20:45.850 "subsystem": "keyring", 00:20:45.850 "config": [ 00:20:45.850 { 00:20:45.850 "method": "keyring_file_add_key", 00:20:45.850 "params": { 00:20:45.850 "name": "key0", 00:20:45.850 "path": "/tmp/tmp.Nubxqbildy" 00:20:45.850 } 00:20:45.850 } 00:20:45.850 ] 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "subsystem": "iobuf", 00:20:45.850 "config": [ 00:20:45.850 { 00:20:45.850 "method": "iobuf_set_options", 00:20:45.850 "params": { 00:20:45.850 
"small_pool_count": 8192, 00:20:45.850 "large_pool_count": 1024, 00:20:45.850 "small_bufsize": 8192, 00:20:45.850 "large_bufsize": 135168, 00:20:45.850 "enable_numa": false 00:20:45.850 } 00:20:45.850 } 00:20:45.850 ] 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "subsystem": "sock", 00:20:45.850 "config": [ 00:20:45.850 { 00:20:45.850 "method": "sock_set_default_impl", 00:20:45.850 "params": { 00:20:45.850 "impl_name": "posix" 00:20:45.850 } 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "method": "sock_impl_set_options", 00:20:45.850 "params": { 00:20:45.850 "impl_name": "ssl", 00:20:45.850 "recv_buf_size": 4096, 00:20:45.850 "send_buf_size": 4096, 00:20:45.850 "enable_recv_pipe": true, 00:20:45.850 "enable_quickack": false, 00:20:45.850 "enable_placement_id": 0, 00:20:45.850 "enable_zerocopy_send_server": true, 00:20:45.850 "enable_zerocopy_send_client": false, 00:20:45.850 "zerocopy_threshold": 0, 00:20:45.850 "tls_version": 0, 00:20:45.850 "enable_ktls": false 00:20:45.850 } 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "method": "sock_impl_set_options", 00:20:45.850 "params": { 00:20:45.850 "impl_name": "posix", 00:20:45.850 "recv_buf_size": 2097152, 00:20:45.850 "send_buf_size": 2097152, 00:20:45.850 "enable_recv_pipe": true, 00:20:45.850 "enable_quickack": false, 00:20:45.850 "enable_placement_id": 0, 00:20:45.850 "enable_zerocopy_send_server": true, 00:20:45.850 "enable_zerocopy_send_client": false, 00:20:45.850 "zerocopy_threshold": 0, 00:20:45.850 "tls_version": 0, 00:20:45.850 "enable_ktls": false 00:20:45.850 } 00:20:45.850 } 00:20:45.850 ] 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "subsystem": "vmd", 00:20:45.850 "config": [] 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "subsystem": "accel", 00:20:45.850 "config": [ 00:20:45.850 { 00:20:45.850 "method": "accel_set_options", 00:20:45.850 "params": { 00:20:45.850 "small_cache_size": 128, 00:20:45.850 "large_cache_size": 16, 00:20:45.850 "task_count": 2048, 00:20:45.850 "sequence_count": 2048, 00:20:45.850 
"buf_count": 2048 00:20:45.850 } 00:20:45.850 } 00:20:45.850 ] 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "subsystem": "bdev", 00:20:45.850 "config": [ 00:20:45.850 { 00:20:45.850 "method": "bdev_set_options", 00:20:45.850 "params": { 00:20:45.850 "bdev_io_pool_size": 65535, 00:20:45.850 "bdev_io_cache_size": 256, 00:20:45.850 "bdev_auto_examine": true, 00:20:45.850 "iobuf_small_cache_size": 128, 00:20:45.850 "iobuf_large_cache_size": 16 00:20:45.850 } 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "method": "bdev_raid_set_options", 00:20:45.850 "params": { 00:20:45.850 "process_window_size_kb": 1024, 00:20:45.850 "process_max_bandwidth_mb_sec": 0 00:20:45.850 } 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "method": "bdev_iscsi_set_options", 00:20:45.850 "params": { 00:20:45.850 "timeout_sec": 30 00:20:45.850 } 00:20:45.850 }, 00:20:45.850 { 00:20:45.850 "method": "bdev_nvme_set_options", 00:20:45.850 "params": { 00:20:45.850 "action_on_timeout": "none", 00:20:45.850 "timeout_us": 0, 00:20:45.850 "timeout_admin_us": 0, 00:20:45.850 "keep_alive_timeout_ms": 10000, 00:20:45.850 "arbitration_burst": 0, 00:20:45.850 "low_priority_weight": 0, 00:20:45.850 "medium_priority_weight": 0, 00:20:45.850 "high_priority_weight": 0, 00:20:45.850 "nvme_adminq_poll_period_us": 10000, 00:20:45.850 "nvme_ioq_poll_period_us": 0, 00:20:45.850 "io_queue_requests": 512, 00:20:45.850 "delay_cmd_submit": true, 00:20:45.851 "transport_retry_count": 4, 00:20:45.851 "bdev_retry_count": 3, 00:20:45.851 "transport_ack_timeout": 0, 00:20:45.851 "ctrlr_loss_timeout_sec": 0, 00:20:45.851 "reconnect_delay_sec": 0, 00:20:45.851 "fast_io_fail_timeout_sec": 0, 00:20:45.851 "disable_auto_failback": false, 00:20:45.851 "generate_uuids": false, 00:20:45.851 "transport_tos": 0, 00:20:45.851 "nvme_error_stat": false, 00:20:45.851 "rdma_srq_size": 0, 00:20:45.851 "io_path_stat": false, 00:20:45.851 "allow_accel_sequence": false, 00:20:45.851 "rdma_max_cq_size": 0, 00:20:45.851 "rdma_cm_event_timeout_ms": 0, 
00:20:45.851 "dhchap_digests": [ 00:20:45.851 "sha256", 00:20:45.851 "sha384", 00:20:45.851 "sha512" 00:20:45.851 ], 00:20:45.851 "dhchap_dhgroups": [ 00:20:45.851 "null", 00:20:45.851 "ffdhe2048", 00:20:45.851 "ffdhe3072", 00:20:45.851 "ffdhe4096", 00:20:45.851 "ffdhe6144", 00:20:45.851 "ffdhe8192" 00:20:45.851 ] 00:20:45.851 } 00:20:45.851 }, 00:20:45.851 { 00:20:45.851 "method": "bdev_nvme_attach_controller", 00:20:45.851 "params": { 00:20:45.851 "name": "nvme0", 00:20:45.851 "trtype": "TCP", 00:20:45.851 "adrfam": "IPv4", 00:20:45.851 "traddr": "10.0.0.2", 00:20:45.851 "trsvcid": "4420", 00:20:45.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.851 "prchk_reftag": false, 00:20:45.851 "prchk_guard": false, 00:20:45.851 "ctrlr_loss_timeout_sec": 0, 00:20:45.851 "reconnect_delay_sec": 0, 00:20:45.851 "fast_io_fail_timeout_sec": 0, 00:20:45.851 "psk": "key0", 00:20:45.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.851 "hdgst": false, 00:20:45.851 "ddgst": false, 00:20:45.851 "multipath": "multipath" 00:20:45.851 } 00:20:45.851 }, 00:20:45.851 { 00:20:45.851 "method": "bdev_nvme_set_hotplug", 00:20:45.851 "params": { 00:20:45.851 "period_us": 100000, 00:20:45.851 "enable": false 00:20:45.851 } 00:20:45.851 }, 00:20:45.851 { 00:20:45.851 "method": "bdev_enable_histogram", 00:20:45.851 "params": { 00:20:45.851 "name": "nvme0n1", 00:20:45.851 "enable": true 00:20:45.851 } 00:20:45.851 }, 00:20:45.851 { 00:20:45.851 "method": "bdev_wait_for_examine" 00:20:45.851 } 00:20:45.851 ] 00:20:45.851 }, 00:20:45.851 { 00:20:45.851 "subsystem": "nbd", 00:20:45.851 "config": [] 00:20:45.851 } 00:20:45.851 ] 00:20:45.851 }' 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 241119 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 241119 ']' 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 241119 00:20:45.851 19:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241119 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241119' 00:20:45.851 killing process with pid 241119 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 241119 00:20:45.851 Received shutdown signal, test time was about 1.000000 seconds 00:20:45.851 00:20:45.851 Latency(us) 00:20:45.851 [2024-12-06T18:19:30.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.851 [2024-12-06T18:19:30.900Z] =================================================================================================================== 00:20:45.851 [2024-12-06T18:19:30.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.851 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 241119 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 241097 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 241097 ']' 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 241097 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.110 19:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241097 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241097' 00:20:46.110 killing process with pid 241097 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 241097 00:20:46.110 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 241097 00:20:46.369 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:46.369 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.369 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:46.369 "subsystems": [ 00:20:46.369 { 00:20:46.369 "subsystem": "keyring", 00:20:46.369 "config": [ 00:20:46.369 { 00:20:46.369 "method": "keyring_file_add_key", 00:20:46.369 "params": { 00:20:46.369 "name": "key0", 00:20:46.369 "path": "/tmp/tmp.Nubxqbildy" 00:20:46.369 } 00:20:46.369 } 00:20:46.369 ] 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "subsystem": "iobuf", 00:20:46.369 "config": [ 00:20:46.369 { 00:20:46.369 "method": "iobuf_set_options", 00:20:46.369 "params": { 00:20:46.369 "small_pool_count": 8192, 00:20:46.369 "large_pool_count": 1024, 00:20:46.369 "small_bufsize": 8192, 00:20:46.369 "large_bufsize": 135168, 00:20:46.369 "enable_numa": false 00:20:46.369 } 00:20:46.369 } 00:20:46.369 ] 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "subsystem": "sock", 00:20:46.369 "config": [ 00:20:46.369 { 00:20:46.369 "method": "sock_set_default_impl", 00:20:46.369 "params": { 00:20:46.369 "impl_name": "posix" 00:20:46.369 
} 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "method": "sock_impl_set_options", 00:20:46.369 "params": { 00:20:46.369 "impl_name": "ssl", 00:20:46.369 "recv_buf_size": 4096, 00:20:46.369 "send_buf_size": 4096, 00:20:46.369 "enable_recv_pipe": true, 00:20:46.369 "enable_quickack": false, 00:20:46.369 "enable_placement_id": 0, 00:20:46.369 "enable_zerocopy_send_server": true, 00:20:46.369 "enable_zerocopy_send_client": false, 00:20:46.369 "zerocopy_threshold": 0, 00:20:46.369 "tls_version": 0, 00:20:46.369 "enable_ktls": false 00:20:46.369 } 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "method": "sock_impl_set_options", 00:20:46.369 "params": { 00:20:46.369 "impl_name": "posix", 00:20:46.369 "recv_buf_size": 2097152, 00:20:46.369 "send_buf_size": 2097152, 00:20:46.369 "enable_recv_pipe": true, 00:20:46.369 "enable_quickack": false, 00:20:46.369 "enable_placement_id": 0, 00:20:46.369 "enable_zerocopy_send_server": true, 00:20:46.369 "enable_zerocopy_send_client": false, 00:20:46.369 "zerocopy_threshold": 0, 00:20:46.369 "tls_version": 0, 00:20:46.369 "enable_ktls": false 00:20:46.369 } 00:20:46.369 } 00:20:46.369 ] 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "subsystem": "vmd", 00:20:46.369 "config": [] 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "subsystem": "accel", 00:20:46.369 "config": [ 00:20:46.369 { 00:20:46.369 "method": "accel_set_options", 00:20:46.369 "params": { 00:20:46.369 "small_cache_size": 128, 00:20:46.369 "large_cache_size": 16, 00:20:46.369 "task_count": 2048, 00:20:46.369 "sequence_count": 2048, 00:20:46.369 "buf_count": 2048 00:20:46.369 } 00:20:46.369 } 00:20:46.369 ] 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "subsystem": "bdev", 00:20:46.369 "config": [ 00:20:46.369 { 00:20:46.369 "method": "bdev_set_options", 00:20:46.369 "params": { 00:20:46.369 "bdev_io_pool_size": 65535, 00:20:46.369 "bdev_io_cache_size": 256, 00:20:46.369 "bdev_auto_examine": true, 00:20:46.369 "iobuf_small_cache_size": 128, 00:20:46.369 "iobuf_large_cache_size": 16 
00:20:46.369 } 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "method": "bdev_raid_set_options", 00:20:46.369 "params": { 00:20:46.369 "process_window_size_kb": 1024, 00:20:46.369 "process_max_bandwidth_mb_sec": 0 00:20:46.369 } 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "method": "bdev_iscsi_set_options", 00:20:46.369 "params": { 00:20:46.369 "timeout_sec": 30 00:20:46.369 } 00:20:46.369 }, 00:20:46.369 { 00:20:46.369 "method": "bdev_nvme_set_options", 00:20:46.369 "params": { 00:20:46.369 "action_on_timeout": "none", 00:20:46.369 "timeout_us": 0, 00:20:46.370 "timeout_admin_us": 0, 00:20:46.370 "keep_alive_timeout_ms": 10000, 00:20:46.370 "arbitration_burst": 0, 00:20:46.370 "low_priority_weight": 0, 00:20:46.370 "medium_priority_weight": 0, 00:20:46.370 "high_priority_weight": 0, 00:20:46.370 "nvme_adminq_poll_period_us": 10000, 00:20:46.370 "nvme_ioq_poll_period_us": 0, 00:20:46.370 "io_queue_requests": 0, 00:20:46.370 "delay_cmd_submit": true, 00:20:46.370 "transport_retry_count": 4, 00:20:46.370 "bdev_retry_count": 3, 00:20:46.370 "transport_ack_timeout": 0, 00:20:46.370 "ctrlr_loss_timeout_sec": 0, 00:20:46.370 "reconnect_delay_sec": 0, 00:20:46.370 "fast_io_fail_timeout_sec": 0, 00:20:46.370 "disable_auto_failback": false, 00:20:46.370 "generate_uuids": false, 00:20:46.370 "transport_tos": 0, 00:20:46.370 "nvme_error_stat": false, 00:20:46.370 "rdma_srq_size": 0, 00:20:46.370 "io_path_stat": false, 00:20:46.370 "allow_accel_sequence": false, 00:20:46.370 "rdma_max_cq_size": 0, 00:20:46.370 "rdma_cm_event_timeout_ms": 0, 00:20:46.370 "dhchap_digests": [ 00:20:46.370 "sha256", 00:20:46.370 "sha384", 00:20:46.370 "sha512" 00:20:46.370 ], 00:20:46.370 "dhchap_dhgroups": [ 00:20:46.370 "null", 00:20:46.370 "ffdhe2048", 00:20:46.370 "ffdhe3072", 00:20:46.370 "ffdhe4096", 00:20:46.370 "ffdhe6144", 00:20:46.370 "ffdhe8192" 00:20:46.370 ] 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "bdev_nvme_set_hotplug", 00:20:46.370 "params": { 00:20:46.370 
"period_us": 100000, 00:20:46.370 "enable": false 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "bdev_malloc_create", 00:20:46.370 "params": { 00:20:46.370 "name": "malloc0", 00:20:46.370 "num_blocks": 8192, 00:20:46.370 "block_size": 4096, 00:20:46.370 "physical_block_size": 4096, 00:20:46.370 "uuid": "ddf550f0-0d68-478c-b9f8-2798da707ff3", 00:20:46.370 "optimal_io_boundary": 0, 00:20:46.370 "md_size": 0, 00:20:46.370 "dif_type": 0, 00:20:46.370 "dif_is_head_of_md": false, 00:20:46.370 "dif_pi_format": 0 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "bdev_wait_for_examine" 00:20:46.370 } 00:20:46.370 ] 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "subsystem": "nbd", 00:20:46.370 "config": [] 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "subsystem": "scheduler", 00:20:46.370 "config": [ 00:20:46.370 { 00:20:46.370 "method": "framework_set_scheduler", 00:20:46.370 "params": { 00:20:46.370 "name": "static" 00:20:46.370 } 00:20:46.370 } 00:20:46.370 ] 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "subsystem": "nvmf", 00:20:46.370 "config": [ 00:20:46.370 { 00:20:46.370 "method": "nvmf_set_config", 00:20:46.370 "params": { 00:20:46.370 "discovery_filter": "match_any", 00:20:46.370 "admin_cmd_passthru": { 00:20:46.370 "identify_ctrlr": false 00:20:46.370 }, 00:20:46.370 "dhchap_digests": [ 00:20:46.370 "sha256", 00:20:46.370 "sha384", 00:20:46.370 "sha512" 00:20:46.370 ], 00:20:46.370 "dhchap_dhgroups": [ 00:20:46.370 "null", 00:20:46.370 "ffdhe2048", 00:20:46.370 "ffdhe3072", 00:20:46.370 "ffdhe4096", 00:20:46.370 "ffdhe6144", 00:20:46.370 "ffdhe8192" 00:20:46.370 ] 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_set_max_subsystems", 00:20:46.370 "params": { 00:20:46.370 "max_subsystems": 1024 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_set_crdt", 00:20:46.370 "params": { 00:20:46.370 "crdt1": 0, 00:20:46.370 "crdt2": 0, 00:20:46.370 "crdt3": 0 00:20:46.370 } 
00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_create_transport", 00:20:46.370 "params": { 00:20:46.370 "trtype": "TCP", 00:20:46.370 "max_queue_depth": 128, 00:20:46.370 "max_io_qpairs_per_ctrlr": 127, 00:20:46.370 "in_capsule_data_size": 4096, 00:20:46.370 "max_io_size": 131072, 00:20:46.370 "io_unit_size": 131072, 00:20:46.370 "max_aq_depth": 128, 00:20:46.370 "num_shared_buffers": 511, 00:20:46.370 "buf_cache_size": 4294967295, 00:20:46.370 "dif_insert_or_strip": false, 00:20:46.370 "zcopy": false, 00:20:46.370 "c2h_success": false, 00:20:46.370 "sock_priority": 0, 00:20:46.370 "abort_timeout_sec": 1, 00:20:46.370 "ack_timeout": 0, 00:20:46.370 "data_wr_pool_size": 0 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_create_subsystem", 00:20:46.370 "params": { 00:20:46.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.370 "allow_any_host": false, 00:20:46.370 "serial_number": "00000000000000000000", 00:20:46.370 "model_number": "SPDK bdev Controller", 00:20:46.370 "max_namespaces": 32, 00:20:46.370 "min_cntlid": 1, 00:20:46.370 "max_cntlid": 65519, 00:20:46.370 "ana_reporting": false 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_subsystem_add_host", 00:20:46.370 "params": { 00:20:46.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.370 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.370 "psk": "key0" 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_subsystem_add_ns", 00:20:46.370 "params": { 00:20:46.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.370 "namespace": { 00:20:46.370 "nsid": 1, 00:20:46.370 "bdev_name": "malloc0", 00:20:46.370 "nguid": "DDF550F00D68478CB9F82798DA707FF3", 00:20:46.370 "uuid": "ddf550f0-0d68-478c-b9f8-2798da707ff3", 00:20:46.370 "no_auto_visible": false 00:20:46.370 } 00:20:46.370 } 00:20:46.370 }, 00:20:46.370 { 00:20:46.370 "method": "nvmf_subsystem_add_listener", 00:20:46.370 "params": { 00:20:46.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:20:46.370 "listen_address": { 00:20:46.370 "trtype": "TCP", 00:20:46.370 "adrfam": "IPv4", 00:20:46.370 "traddr": "10.0.0.2", 00:20:46.370 "trsvcid": "4420" 00:20:46.370 }, 00:20:46.370 "secure_channel": false, 00:20:46.370 "sock_impl": "ssl" 00:20:46.370 } 00:20:46.370 } 00:20:46.370 ] 00:20:46.370 } 00:20:46.370 ] 00:20:46.370 }' 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=241529 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 241529 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 241529 ']' 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.370 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.370 [2024-12-06 19:19:31.416365] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:46.370 [2024-12-06 19:19:31.416450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.629 [2024-12-06 19:19:31.501264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.629 [2024-12-06 19:19:31.560102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.629 [2024-12-06 19:19:31.560167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.629 [2024-12-06 19:19:31.560180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.629 [2024-12-06 19:19:31.560192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.629 [2024-12-06 19:19:31.560201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.629 [2024-12-06 19:19:31.560907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.889 [2024-12-06 19:19:31.803184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.889 [2024-12-06 19:19:31.835179] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.889 [2024-12-06 19:19:31.835462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=241683 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 241683 /var/tmp/bdevperf.sock 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 241683 ']' 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:47.455 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:47.455 "subsystems": [ 00:20:47.455 { 00:20:47.455 "subsystem": "keyring", 00:20:47.455 "config": [ 00:20:47.455 { 00:20:47.455 "method": "keyring_file_add_key", 00:20:47.455 "params": { 00:20:47.455 "name": "key0", 00:20:47.455 "path": "/tmp/tmp.Nubxqbildy" 00:20:47.455 } 00:20:47.455 } 00:20:47.455 ] 00:20:47.455 }, 00:20:47.455 { 00:20:47.455 "subsystem": "iobuf", 00:20:47.455 "config": [ 00:20:47.455 { 00:20:47.455 "method": "iobuf_set_options", 00:20:47.455 "params": { 00:20:47.455 "small_pool_count": 8192, 00:20:47.455 "large_pool_count": 1024, 00:20:47.455 "small_bufsize": 8192, 00:20:47.455 "large_bufsize": 135168, 00:20:47.456 "enable_numa": false 00:20:47.456 } 00:20:47.456 } 00:20:47.456 ] 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "subsystem": "sock", 00:20:47.456 "config": [ 00:20:47.456 { 00:20:47.456 "method": "sock_set_default_impl", 00:20:47.456 "params": { 00:20:47.456 "impl_name": "posix" 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "sock_impl_set_options", 00:20:47.456 "params": { 00:20:47.456 "impl_name": "ssl", 00:20:47.456 "recv_buf_size": 4096, 00:20:47.456 "send_buf_size": 4096, 00:20:47.456 "enable_recv_pipe": true, 00:20:47.456 "enable_quickack": false, 00:20:47.456 "enable_placement_id": 0, 00:20:47.456 "enable_zerocopy_send_server": true, 00:20:47.456 "enable_zerocopy_send_client": false, 00:20:47.456 "zerocopy_threshold": 0, 00:20:47.456 "tls_version": 0, 00:20:47.456 "enable_ktls": false 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "sock_impl_set_options", 00:20:47.456 "params": { 00:20:47.456 "impl_name": "posix", 00:20:47.456 "recv_buf_size": 2097152, 00:20:47.456 "send_buf_size": 2097152, 00:20:47.456 "enable_recv_pipe": true, 00:20:47.456 "enable_quickack": false, 00:20:47.456 "enable_placement_id": 0, 00:20:47.456 "enable_zerocopy_send_server": true, 00:20:47.456 
"enable_zerocopy_send_client": false, 00:20:47.456 "zerocopy_threshold": 0, 00:20:47.456 "tls_version": 0, 00:20:47.456 "enable_ktls": false 00:20:47.456 } 00:20:47.456 } 00:20:47.456 ] 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "subsystem": "vmd", 00:20:47.456 "config": [] 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "subsystem": "accel", 00:20:47.456 "config": [ 00:20:47.456 { 00:20:47.456 "method": "accel_set_options", 00:20:47.456 "params": { 00:20:47.456 "small_cache_size": 128, 00:20:47.456 "large_cache_size": 16, 00:20:47.456 "task_count": 2048, 00:20:47.456 "sequence_count": 2048, 00:20:47.456 "buf_count": 2048 00:20:47.456 } 00:20:47.456 } 00:20:47.456 ] 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "subsystem": "bdev", 00:20:47.456 "config": [ 00:20:47.456 { 00:20:47.456 "method": "bdev_set_options", 00:20:47.456 "params": { 00:20:47.456 "bdev_io_pool_size": 65535, 00:20:47.456 "bdev_io_cache_size": 256, 00:20:47.456 "bdev_auto_examine": true, 00:20:47.456 "iobuf_small_cache_size": 128, 00:20:47.456 "iobuf_large_cache_size": 16 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_raid_set_options", 00:20:47.456 "params": { 00:20:47.456 "process_window_size_kb": 1024, 00:20:47.456 "process_max_bandwidth_mb_sec": 0 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_iscsi_set_options", 00:20:47.456 "params": { 00:20:47.456 "timeout_sec": 30 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_nvme_set_options", 00:20:47.456 "params": { 00:20:47.456 "action_on_timeout": "none", 00:20:47.456 "timeout_us": 0, 00:20:47.456 "timeout_admin_us": 0, 00:20:47.456 "keep_alive_timeout_ms": 10000, 00:20:47.456 "arbitration_burst": 0, 00:20:47.456 "low_priority_weight": 0, 00:20:47.456 "medium_priority_weight": 0, 00:20:47.456 "high_priority_weight": 0, 00:20:47.456 "nvme_adminq_poll_period_us": 10000, 00:20:47.456 "nvme_ioq_poll_period_us": 0, 00:20:47.456 "io_queue_requests": 512, 00:20:47.456 
"delay_cmd_submit": true, 00:20:47.456 "transport_retry_count": 4, 00:20:47.456 "bdev_retry_count": 3, 00:20:47.456 "transport_ack_timeout": 0, 00:20:47.456 "ctrlr_loss_timeout_sec": 0, 00:20:47.456 "reconnect_delay_sec": 0, 00:20:47.456 "fast_io_fail_timeout_sec": 0, 00:20:47.456 "disable_auto_failback": false, 00:20:47.456 "generate_uuids": false, 00:20:47.456 "transport_tos": 0, 00:20:47.456 "nvme_error_stat": false, 00:20:47.456 "rdma_srq_size": 0, 00:20:47.456 "io_path_stat": false, 00:20:47.456 "allow_accel_sequence": false, 00:20:47.456 "rdma_max_cq_size": 0, 00:20:47.456 "rdma_cm_event_timeout_ms": 0 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.456 , 00:20:47.456 "dhchap_digests": [ 00:20:47.456 "sha256", 00:20:47.456 "sha384", 00:20:47.456 "sha512" 00:20:47.456 ], 00:20:47.456 "dhchap_dhgroups": [ 00:20:47.456 "null", 00:20:47.456 "ffdhe2048", 00:20:47.456 "ffdhe3072", 00:20:47.456 "ffdhe4096", 00:20:47.456 "ffdhe6144", 00:20:47.456 "ffdhe8192" 00:20:47.456 ] 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_nvme_attach_controller", 00:20:47.456 "params": { 00:20:47.456 "name": "nvme0", 00:20:47.456 "trtype": "TCP", 00:20:47.456 "adrfam": "IPv4", 00:20:47.456 "traddr": "10.0.0.2", 00:20:47.456 "trsvcid": "4420", 00:20:47.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.456 "prchk_reftag": false, 00:20:47.456 "prchk_guard": false, 00:20:47.456 "ctrlr_loss_timeout_sec": 0, 00:20:47.456 "reconnect_delay_sec": 0, 00:20:47.456 "fast_io_fail_timeout_sec": 0, 00:20:47.456 "psk": "key0", 00:20:47.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.456 "hdgst": false, 00:20:47.456 "ddgst": false, 00:20:47.456 "multipath": "multipath" 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_nvme_set_hotplug", 00:20:47.456 "params": { 00:20:47.456 "period_us": 100000, 
00:20:47.456 "enable": false 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_enable_histogram", 00:20:47.456 "params": { 00:20:47.456 "name": "nvme0n1", 00:20:47.456 "enable": true 00:20:47.456 } 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "method": "bdev_wait_for_examine" 00:20:47.456 } 00:20:47.456 ] 00:20:47.456 }, 00:20:47.456 { 00:20:47.456 "subsystem": "nbd", 00:20:47.456 "config": [] 00:20:47.456 } 00:20:47.456 ] 00:20:47.456 }' 00:20:47.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.456 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.456 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.715 [2024-12-06 19:19:32.532180] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:20:47.715 [2024-12-06 19:19:32.532266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid241683 ] 00:20:47.715 [2024-12-06 19:19:32.601731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.715 [2024-12-06 19:19:32.657986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.976 [2024-12-06 19:19:32.839987] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.543 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.543 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.543 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:48.543 19:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:48.801 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.801 19:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.061 Running I/O for 1 seconds... 00:20:49.997 3307.00 IOPS, 12.92 MiB/s 00:20:49.997 Latency(us) 00:20:49.997 [2024-12-06T18:19:35.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.997 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:49.997 Verification LBA range: start 0x0 length 0x2000 00:20:49.997 nvme0n1 : 1.02 3359.62 13.12 0.00 0.00 37702.42 10388.67 43884.85 00:20:49.997 [2024-12-06T18:19:35.046Z] =================================================================================================================== 00:20:49.997 [2024-12-06T18:19:35.046Z] Total : 3359.62 13.12 0.00 0.00 37702.42 10388.67 43884.85 00:20:49.997 { 00:20:49.997 "results": [ 00:20:49.997 { 00:20:49.997 "job": "nvme0n1", 00:20:49.997 "core_mask": "0x2", 00:20:49.997 "workload": "verify", 00:20:49.997 "status": "finished", 00:20:49.997 "verify_range": { 00:20:49.997 "start": 0, 00:20:49.997 "length": 8192 00:20:49.997 }, 00:20:49.997 "queue_depth": 128, 00:20:49.997 "io_size": 4096, 00:20:49.997 "runtime": 1.022736, 00:20:49.997 "iops": 3359.615775723158, 00:20:49.997 "mibps": 13.123499123918586, 00:20:49.997 "io_failed": 0, 00:20:49.997 "io_timeout": 0, 00:20:49.997 "avg_latency_us": 37702.421689302806, 00:20:49.997 "min_latency_us": 10388.66962962963, 00:20:49.997 "max_latency_us": 43884.847407407404 00:20:49.997 } 00:20:49.997 ], 00:20:49.997 "core_count": 1 00:20:49.997 } 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:49.997 19:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:49.997 19:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:49.997 nvmf_trace.0 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 241683 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 241683 ']' 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 241683 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 241683 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241683' 00:20:50.276 killing process with pid 241683 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 241683 00:20:50.276 Received shutdown signal, test time was about 1.000000 seconds 00:20:50.276 00:20:50.276 Latency(us) 00:20:50.276 [2024-12-06T18:19:35.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.276 [2024-12-06T18:19:35.325Z] =================================================================================================================== 00:20:50.276 [2024-12-06T18:19:35.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 241683 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:50.276 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.277 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.277 rmmod nvme_tcp 00:20:50.539 rmmod nvme_fabrics 00:20:50.539 rmmod nvme_keyring 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 241529 ']' 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 241529 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 241529 ']' 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 241529 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241529 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.539 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241529' 00:20:50.539 killing process with pid 241529 00:20:50.540 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 241529 00:20:50.540 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 241529 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.800 19:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ruUML2lgVE /tmp/tmp.6h9DV6vWdV /tmp/tmp.Nubxqbildy 00:20:52.713 00:20:52.713 real 1m23.738s 00:20:52.713 user 2m18.105s 00:20:52.713 sys 0m28.455s 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.713 ************************************ 00:20:52.713 END TEST nvmf_tls 00:20:52.713 ************************************ 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:52.713 ************************************ 00:20:52.713 START TEST nvmf_fips 00:20:52.713 ************************************ 00:20:52.713 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:52.974 * Looking for test storage... 00:20:52.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.974 
19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:52.974 19:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.974 --rc genhtml_branch_coverage=1 00:20:52.974 --rc genhtml_function_coverage=1 00:20:52.974 --rc genhtml_legend=1 00:20:52.974 --rc geninfo_all_blocks=1 00:20:52.974 --rc geninfo_unexecuted_blocks=1 00:20:52.974 00:20:52.974 ' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.974 --rc genhtml_branch_coverage=1 00:20:52.974 --rc genhtml_function_coverage=1 00:20:52.974 --rc genhtml_legend=1 00:20:52.974 --rc geninfo_all_blocks=1 00:20:52.974 --rc geninfo_unexecuted_blocks=1 00:20:52.974 00:20:52.974 ' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.974 --rc genhtml_branch_coverage=1 00:20:52.974 --rc genhtml_function_coverage=1 00:20:52.974 --rc genhtml_legend=1 00:20:52.974 --rc geninfo_all_blocks=1 00:20:52.974 --rc geninfo_unexecuted_blocks=1 00:20:52.974 00:20:52.974 ' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:52.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.974 --rc genhtml_branch_coverage=1 00:20:52.974 --rc genhtml_function_coverage=1 00:20:52.974 --rc genhtml_legend=1 00:20:52.974 --rc geninfo_all_blocks=1 00:20:52.974 --rc geninfo_unexecuted_blocks=1 00:20:52.974 00:20:52.974 ' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.974 19:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.974 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.975 19:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:52.975 19:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:53.234 Error setting digest 00:20:53.234 40B209065A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:53.234 40B209065A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.234 19:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.234 19:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.769 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.769 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.769 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
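The `cmp_versions 3.1.1 '>=' 3.0.0` trace earlier in this chunk (scripts/common.sh@333-368) splits each dotted version on `.-` into an array and compares component by component. A condensed sketch of the same idea (`ge_version` is a hypothetical helper, not SPDK's `cmp_versions` itself, and only handles the `>=` case):

```shell
# Compare dotted versions component by component, as the traced
# cmp_versions does. ge_version is a condensed hypothetical helper.
ge_version() {  # usage: ge_version 3.1.1 3.0.0  -> exit 0 if $1 >= $2
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
    done
    return 0  # all components equal
}
ge_version 3.1.1 3.0.0 && echo yes || echo no   # prints yes
```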
00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:55.770 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:55.770 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:55.770 Found net devices under 0000:84:00.0: cvl_0_0 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
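The "Found net devices under 0000:84:00.x" lines above come from globbing each PCI function's `net/` directory in sysfs. A sketch of that walk, factored into a function so the sysfs root and PCI address are explicit parameters (the function name is hypothetical; the address is the one from this run):

```shell
# List kernel network interfaces bound to a PCI function via sysfs,
# as the nvmf/common.sh device walk above does. Hypothetical helper.
list_pci_net_devs() {  # usage: list_pci_net_devs <sysfs-root> <pci-addr>
    local root=$1 pci=$2 netdir
    for netdir in "$root/$pci/net/"*; do
        [ -e "$netdir" ] || continue  # glob unmatched: no net device
        echo "Found net devices under $pci: ${netdir##*/}"
    done
}
# On this runner the real root is /sys/bus/pci/devices:
list_pci_net_devs /sys/bus/pci/devices 0000:84:00.0
```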
00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:55.770 Found net devices under 0000:84:00.1: cvl_0_1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.770 19:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:20:55.770 00:20:55.770 --- 10.0.0.2 ping statistics --- 00:20:55.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.770 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:55.770 00:20:55.770 --- 10.0.0.1 ping statistics --- 00:20:55.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.770 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.770 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.771 19:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=244062 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 244062 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 244062 ']' 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.771 [2024-12-06 19:19:40.452280] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:55.771 [2024-12-06 19:19:40.452354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.771 [2024-12-06 19:19:40.520508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.771 [2024-12-06 19:19:40.575170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.771 [2024-12-06 19:19:40.575221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.771 [2024-12-06 19:19:40.575259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.771 [2024-12-06 19:19:40.575269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.771 [2024-12-06 19:19:40.575278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
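`waitforlisten 244062` above blocks until the freshly launched nvmf_tgt answers on `/var/tmp/spdk.sock`. A rough sketch of that polling pattern, assuming only that the readiness signal is the UNIX socket appearing (`wait_for_sock` is a hypothetical condensed helper, not SPDK's actual implementation, which also issues an RPC to confirm liveness):

```shell
# Poll until a UNIX socket path exists, with a retry cap.
# wait_for_sock is a hypothetical sketch of waitforlisten's core loop.
wait_for_sock() {  # usage: wait_for_sock <socket-path> [max-retries]
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0  # socket exists: target is up
        sleep 0.1
    done
    return 1  # gave up
}
```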
00:20:55.771 [2024-12-06 19:19:40.575844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.D3Q 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.D3Q 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.D3Q 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.D3Q 00:20:55.771 19:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.031 [2024-12-06 19:19:41.026201] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.031 [2024-12-06 19:19:41.042215] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.031 [2024-12-06 19:19:41.042484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.290 malloc0 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=244209 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 244209 /var/tmp/bdevperf.sock 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 244209 ']' 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.290 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:56.290 [2024-12-06 19:19:41.173435] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:20:56.290 [2024-12-06 19:19:41.173512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244209 ] 00:20:56.290 [2024-12-06 19:19:41.240948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.290 [2024-12-06 19:19:41.297857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.549 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.549 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:56.549 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.D3Q 00:20:56.807 19:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:57.066 [2024-12-06 19:19:41.921609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.066 TLSTESTn1 00:20:57.066 19:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.066 Running I/O for 10 seconds... 
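The per-second samples that follow report both IOPS and MiB/s; with the 4096-byte IO size configured above (`-o 4096`), the two are related by MiB/s = IOPS × 4096 / 1048576. A quick sanity check on the final average from the summary below:

```shell
# Cross-check the bdevperf summary: 3455.62 IOPS at 4 KiB per IO
# should correspond to about 13.50 MiB/s.
awk 'BEGIN { iops = 3455.62; printf "%.2f\n", iops * 4096 / 1048576 }'
# prints 13.50
```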
00:20:59.390 3299.00 IOPS, 12.89 MiB/s [2024-12-06T18:19:45.376Z] 3332.00 IOPS, 13.02 MiB/s [2024-12-06T18:19:46.333Z] 3376.00 IOPS, 13.19 MiB/s [2024-12-06T18:19:47.267Z] 3348.50 IOPS, 13.08 MiB/s [2024-12-06T18:19:48.204Z] 3404.40 IOPS, 13.30 MiB/s [2024-12-06T18:19:49.139Z] 3427.83 IOPS, 13.39 MiB/s [2024-12-06T18:19:50.512Z] 3435.29 IOPS, 13.42 MiB/s [2024-12-06T18:19:51.448Z] 3445.00 IOPS, 13.46 MiB/s [2024-12-06T18:19:52.387Z] 3458.33 IOPS, 13.51 MiB/s [2024-12-06T18:19:52.387Z] 3449.80 IOPS, 13.48 MiB/s 00:21:07.338 Latency(us) 00:21:07.338 [2024-12-06T18:19:52.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.338 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:07.338 Verification LBA range: start 0x0 length 0x2000 00:21:07.338 TLSTESTn1 : 10.02 3455.62 13.50 0.00 0.00 36983.16 7330.32 33204.91 00:21:07.338 [2024-12-06T18:19:52.387Z] =================================================================================================================== 00:21:07.338 [2024-12-06T18:19:52.387Z] Total : 3455.62 13.50 0.00 0.00 36983.16 7330.32 33204.91 00:21:07.338 { 00:21:07.338 "results": [ 00:21:07.338 { 00:21:07.338 "job": "TLSTESTn1", 00:21:07.338 "core_mask": "0x4", 00:21:07.338 "workload": "verify", 00:21:07.338 "status": "finished", 00:21:07.338 "verify_range": { 00:21:07.338 "start": 0, 00:21:07.338 "length": 8192 00:21:07.338 }, 00:21:07.338 "queue_depth": 128, 00:21:07.338 "io_size": 4096, 00:21:07.338 "runtime": 10.019621, 00:21:07.338 "iops": 3455.61972853065, 00:21:07.338 "mibps": 13.49851456457285, 00:21:07.338 "io_failed": 0, 00:21:07.338 "io_timeout": 0, 00:21:07.338 "avg_latency_us": 36983.16382008626, 00:21:07.338 "min_latency_us": 7330.322962962963, 00:21:07.338 "max_latency_us": 33204.90666666667 00:21:07.338 } 00:21:07.338 ], 00:21:07.339 "core_count": 1 00:21:07.339 } 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:07.339 
19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:07.339 nvmf_trace.0 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 244209 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 244209 ']' 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 244209 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244209 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244209' 00:21:07.339 killing process with pid 244209 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 244209 00:21:07.339 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.339 00:21:07.339 Latency(us) 00:21:07.339 [2024-12-06T18:19:52.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.339 [2024-12-06T18:19:52.388Z] =================================================================================================================== 00:21:07.339 [2024-12-06T18:19:52.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.339 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 244209 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.599 rmmod nvme_tcp 00:21:07.599 rmmod nvme_fabrics 00:21:07.599 rmmod nvme_keyring 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.599 19:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 244062 ']' 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 244062 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 244062 ']' 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 244062 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244062 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244062' 00:21:07.599 killing process with pid 244062 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 244062 00:21:07.599 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 244062 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.859 19:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.397 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.397 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.D3Q 00:21:10.397 00:21:10.397 real 0m17.149s 00:21:10.397 user 0m21.494s 00:21:10.397 sys 0m6.666s 00:21:10.397 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:10.398 ************************************ 00:21:10.398 END TEST nvmf_fips 00:21:10.398 ************************************ 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.398 ************************************ 00:21:10.398 START TEST nvmf_control_msg_list 00:21:10.398 ************************************ 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:10.398 * Looking for test storage... 00:21:10.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.398 19:19:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.398 --rc genhtml_branch_coverage=1 00:21:10.398 --rc genhtml_function_coverage=1 00:21:10.398 --rc genhtml_legend=1 00:21:10.398 --rc geninfo_all_blocks=1 00:21:10.398 --rc geninfo_unexecuted_blocks=1 00:21:10.398 00:21:10.398 ' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.398 --rc genhtml_branch_coverage=1 00:21:10.398 --rc genhtml_function_coverage=1 00:21:10.398 --rc genhtml_legend=1 00:21:10.398 --rc geninfo_all_blocks=1 00:21:10.398 --rc geninfo_unexecuted_blocks=1 00:21:10.398 00:21:10.398 ' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.398 --rc genhtml_branch_coverage=1 00:21:10.398 --rc genhtml_function_coverage=1 00:21:10.398 --rc genhtml_legend=1 00:21:10.398 --rc geninfo_all_blocks=1 00:21:10.398 --rc geninfo_unexecuted_blocks=1 00:21:10.398 00:21:10.398 ' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.398 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.398 --rc genhtml_branch_coverage=1 00:21:10.398 --rc genhtml_function_coverage=1 00:21:10.398 --rc genhtml_legend=1 00:21:10.398 --rc geninfo_all_blocks=1 00:21:10.398 --rc geninfo_unexecuted_blocks=1 00:21:10.398 00:21:10.398 ' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.398 19:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.398 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.398 19:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.399 19:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.399 19:19:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.301 19:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.301 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:12.302 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:12.302 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.302 19:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:12.302 Found net devices under 0000:84:00.0: cvl_0_0 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.302 19:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:12.302 Found net devices under 0000:84:00.1: cvl_0_1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.302 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.561 19:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:12.561 00:21:12.561 --- 10.0.0.2 ping statistics --- 00:21:12.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.561 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:21:12.561 00:21:12.561 --- 10.0.0.1 ping statistics --- 00:21:12.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.561 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=247495 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.561 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 247495 00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 247495 ']' 00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.562 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.562 [2024-12-06 19:19:57.464941] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:21:12.562 [2024-12-06 19:19:57.465039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.562 [2024-12-06 19:19:57.533771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.562 [2024-12-06 19:19:57.586148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.562 [2024-12-06 19:19:57.586214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.562 [2024-12-06 19:19:57.586228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.562 [2024-12-06 19:19:57.586238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.562 [2024-12-06 19:19:57.586248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.562 [2024-12-06 19:19:57.586876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 [2024-12-06 19:19:57.724568] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 Malloc0 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.821 [2024-12-06 19:19:57.764382] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=247518 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=247519 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=247520 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 247518 00:21:12.821 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.821 [2024-12-06 19:19:57.822876] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:12.821 [2024-12-06 19:19:57.832636] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.821 [2024-12-06 19:19:57.842756] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:14.202 Initializing NVMe Controllers 00:21:14.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:14.202 Initialization complete. Launching workers. 00:21:14.202 ======================================================== 00:21:14.202 Latency(us) 00:21:14.202 Device Information : IOPS MiB/s Average min max 00:21:14.202 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 81.00 0.32 12779.13 146.86 40977.16 00:21:14.202 ======================================================== 00:21:14.202 Total : 81.00 0.32 12779.13 146.86 40977.16 00:21:14.202 00:21:14.202 Initializing NVMe Controllers 00:21:14.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:14.202 Initialization complete. Launching workers. 
00:21:14.202 ======================================================== 00:21:14.202 Latency(us) 00:21:14.202 Device Information : IOPS MiB/s Average min max 00:21:14.202 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 27.00 0.11 37887.37 364.15 41039.75 00:21:14.202 ======================================================== 00:21:14.202 Total : 27.00 0.11 37887.37 364.15 41039.75 00:21:14.202 00:21:14.202 Initializing NVMe Controllers 00:21:14.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:14.202 Initialization complete. Launching workers. 00:21:14.202 ======================================================== 00:21:14.202 Latency(us) 00:21:14.202 Device Information : IOPS MiB/s Average min max 00:21:14.202 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40912.84 40821.47 41272.81 00:21:14.202 ======================================================== 00:21:14.202 Total : 25.00 0.10 40912.84 40821.47 41272.81 00:21:14.202 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 247519 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 247520 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.202 19:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.202 rmmod nvme_tcp 00:21:14.202 rmmod nvme_fabrics 00:21:14.202 rmmod nvme_keyring 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.202 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 247495 ']' 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 247495 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 247495 ']' 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 247495 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 247495 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 247495' 00:21:14.203 killing process with pid 247495 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 247495 00:21:14.203 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 247495 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.463 19:19:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.999 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.999 00:21:16.999 real 0m6.557s 00:21:16.999 user 0m6.218s 00:21:16.999 sys 
0m2.488s 00:21:16.999 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.999 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.999 ************************************ 00:21:16.999 END TEST nvmf_control_msg_list 00:21:16.999 ************************************ 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.000 ************************************ 00:21:17.000 START TEST nvmf_wait_for_buf 00:21:17.000 ************************************ 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:17.000 * Looking for test storage... 
00:21:17.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:21:17.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.000 --rc genhtml_branch_coverage=1 00:21:17.000 --rc genhtml_function_coverage=1 00:21:17.000 --rc genhtml_legend=1 00:21:17.000 --rc geninfo_all_blocks=1 00:21:17.000 --rc geninfo_unexecuted_blocks=1 00:21:17.000 00:21:17.000 ' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.000 --rc genhtml_branch_coverage=1 00:21:17.000 --rc genhtml_function_coverage=1 00:21:17.000 --rc genhtml_legend=1 00:21:17.000 --rc geninfo_all_blocks=1 00:21:17.000 --rc geninfo_unexecuted_blocks=1 00:21:17.000 00:21:17.000 ' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.000 --rc genhtml_branch_coverage=1 00:21:17.000 --rc genhtml_function_coverage=1 00:21:17.000 --rc genhtml_legend=1 00:21:17.000 --rc geninfo_all_blocks=1 00:21:17.000 --rc geninfo_unexecuted_blocks=1 00:21:17.000 00:21:17.000 ' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.000 --rc genhtml_branch_coverage=1 00:21:17.000 --rc genhtml_function_coverage=1 00:21:17.000 --rc genhtml_legend=1 00:21:17.000 --rc geninfo_all_blocks=1 00:21:17.000 --rc geninfo_unexecuted_blocks=1 00:21:17.000 00:21:17.000 ' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:17.000 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.001 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.906 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:18.907 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:18.907 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:18.907 Found net devices under 0000:84:00.0: cvl_0_0 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.907 19:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:18.907 Found net devices under 0000:84:00.1: cvl_0_1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.907 19:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.907 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.168 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.168 19:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.168 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.169 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:21:19.169 00:21:19.169 --- 10.0.0.2 ping statistics --- 00:21:19.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.169 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:19.169 00:21:19.169 --- 10.0.0.1 ping statistics --- 00:21:19.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.169 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=249735 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 249735 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 249735 ']' 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.169 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.169 [2024-12-06 19:20:04.085990] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:21:19.169 [2024-12-06 19:20:04.086095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.169 [2024-12-06 19:20:04.159036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.169 [2024-12-06 19:20:04.216899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.169 [2024-12-06 19:20:04.216966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:19.169 [2024-12-06 19:20:04.216980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.169 [2024-12-06 19:20:04.216992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.169 [2024-12-06 19:20:04.217001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.430 [2024-12-06 19:20:04.217691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 
19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 Malloc0 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.430 [2024-12-06 19:20:04.456898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:19.430 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.689 [2024-12-06 19:20:04.481147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.689 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:19.689 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.689 [2024-12-06 19:20:04.567826] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:21.062 Initializing NVMe Controllers 00:21:21.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:21.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:21.062 Initialization complete. Launching workers. 00:21:21.062 ======================================================== 00:21:21.062 Latency(us) 00:21:21.062 Device Information : IOPS MiB/s Average min max 00:21:21.062 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.93 16.12 32113.87 7215.93 63836.82 00:21:21.062 ======================================================== 00:21:21.062 Total : 128.93 16.12 32113.87 7215.93 63836.82 00:21:21.062 00:21:21.062 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:21.062 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:21.062 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.062 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.321 19:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.321 rmmod nvme_tcp 00:21:21.321 rmmod nvme_fabrics 00:21:21.321 rmmod nvme_keyring 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 249735 ']' 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 249735 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 249735 ']' 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 249735 
00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249735 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249735' 00:21:21.321 killing process with pid 249735 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 249735 00:21:21.321 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 249735 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.580 19:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.580 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.481 00:21:23.481 real 0m6.950s 00:21:23.481 user 0m3.379s 00:21:23.481 sys 0m2.037s 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:23.481 ************************************ 00:21:23.481 END TEST nvmf_wait_for_buf 00:21:23.481 ************************************ 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.481 19:20:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.010 
19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:26.010 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.010 19:20:10 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:26.010 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:26.010 Found net devices under 0000:84:00.0: cvl_0_0 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:26.010 Found net devices under 0000:84:00.1: cvl_0_1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:26.010 ************************************ 00:21:26.010 START TEST nvmf_perf_adq 00:21:26.010 ************************************ 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:26.010 * Looking for test storage... 00:21:26.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.010 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:26.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.011 --rc genhtml_branch_coverage=1 00:21:26.011 --rc genhtml_function_coverage=1 00:21:26.011 --rc genhtml_legend=1 00:21:26.011 --rc geninfo_all_blocks=1 00:21:26.011 --rc geninfo_unexecuted_blocks=1 00:21:26.011 00:21:26.011 ' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:26.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.011 --rc genhtml_branch_coverage=1 00:21:26.011 --rc genhtml_function_coverage=1 00:21:26.011 --rc genhtml_legend=1 00:21:26.011 --rc geninfo_all_blocks=1 00:21:26.011 --rc geninfo_unexecuted_blocks=1 00:21:26.011 00:21:26.011 ' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:26.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.011 --rc genhtml_branch_coverage=1 00:21:26.011 --rc genhtml_function_coverage=1 00:21:26.011 --rc genhtml_legend=1 00:21:26.011 --rc geninfo_all_blocks=1 00:21:26.011 --rc geninfo_unexecuted_blocks=1 00:21:26.011 00:21:26.011 ' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:26.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.011 --rc genhtml_branch_coverage=1 00:21:26.011 --rc genhtml_function_coverage=1 00:21:26.011 --rc genhtml_legend=1 00:21:26.011 --rc geninfo_all_blocks=1 00:21:26.011 --rc geninfo_unexecuted_blocks=1 00:21:26.011 00:21:26.011 ' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.011 19:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.011 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.543 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.544 19:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:28.544 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:28.544 
Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:28.544 Found net devices under 0000:84:00.0: cvl_0_0 00:21:28.544 19:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:28.544 Found net devices under 0000:84:00.1: cvl_0_1 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:28.544 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:28.803 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:32.088 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.366 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:37.367 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:37.367 19:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:37.367 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:37.367 Found net devices under 0000:84:00.0: cvl_0_0 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:37.367 Found net devices under 0000:84:00.1: cvl_0_1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:37.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:21:37.367 00:21:37.367 --- 10.0.0.2 ping statistics --- 00:21:37.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.367 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:37.367 00:21:37.367 --- 10.0.0.1 ping statistics --- 00:21:37.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.367 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=254728 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 254728 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 254728 ']' 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.367 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.368 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.368 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.368 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 [2024-12-06 19:20:21.992836] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:21:37.368 [2024-12-06 19:20:21.992919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.368 [2024-12-06 19:20:22.064717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.368 [2024-12-06 19:20:22.123152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.368 [2024-12-06 19:20:22.123205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.368 [2024-12-06 19:20:22.123228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.368 [2024-12-06 19:20:22.123238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.368 [2024-12-06 19:20:22.123251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
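The target above is launched with core mask `-m 0xF`, the EAL reports "Total cores available: 4", and the reactor lines that follow show one reactor per core 0-3 (the perf initiator later uses `-c 0xF0`, i.e. cores 4-7). A small illustrative helper (not part of SPDK) showing how such a hex mask expands to core ids:

```python
# Illustrative only: expand a reactor/lcore mask such as the "-m 0xF"
# passed to nvmf_tgt (or the "-c 0xF0" passed to spdk_nvme_perf)
# into the list of CPU core ids it selects.
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core ids whose bits are set in the mask."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

print(cores_from_mask(0xF))   # -> [0, 1, 2, 3], the target's reactor cores
print(cores_from_mask(0xF0))  # -> [4, 5, 6, 7], the perf initiator's cores
```

This matches the log: four reactors start on cores 0-3, and the perf run later attaches namespaces to lcores 4-7.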
00:21:37.368 [2024-12-06 19:20:22.124917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.368 [2024-12-06 19:20:22.124977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.368 [2024-12-06 19:20:22.125043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.368 [2024-12-06 19:20:22.125047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:37.368 19:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.368 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.368 [2024-12-06 19:20:22.411333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.627 Malloc1 00:21:37.627 19:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.627 [2024-12-06 19:20:22.485232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=254769 00:21:37.627 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:37.627 19:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:39.678 "tick_rate": 2700000000, 00:21:39.678 "poll_groups": [ 00:21:39.678 { 00:21:39.678 "name": "nvmf_tgt_poll_group_000", 00:21:39.678 "admin_qpairs": 1, 00:21:39.678 "io_qpairs": 1, 00:21:39.678 "current_admin_qpairs": 1, 00:21:39.678 "current_io_qpairs": 1, 00:21:39.678 "pending_bdev_io": 0, 00:21:39.678 "completed_nvme_io": 18567, 00:21:39.678 "transports": [ 00:21:39.678 { 00:21:39.678 "trtype": "TCP" 00:21:39.678 } 00:21:39.678 ] 00:21:39.678 }, 00:21:39.678 { 00:21:39.678 "name": "nvmf_tgt_poll_group_001", 00:21:39.678 "admin_qpairs": 0, 00:21:39.678 "io_qpairs": 1, 00:21:39.678 "current_admin_qpairs": 0, 00:21:39.678 "current_io_qpairs": 1, 00:21:39.678 "pending_bdev_io": 0, 00:21:39.678 "completed_nvme_io": 18931, 00:21:39.678 "transports": [ 00:21:39.678 { 00:21:39.678 "trtype": "TCP" 00:21:39.678 } 00:21:39.678 ] 00:21:39.678 }, 00:21:39.678 { 00:21:39.678 "name": "nvmf_tgt_poll_group_002", 00:21:39.678 "admin_qpairs": 0, 00:21:39.678 "io_qpairs": 1, 00:21:39.678 "current_admin_qpairs": 0, 00:21:39.678 "current_io_qpairs": 1, 00:21:39.678 "pending_bdev_io": 0, 00:21:39.678 "completed_nvme_io": 19136, 00:21:39.678 
"transports": [ 00:21:39.678 { 00:21:39.678 "trtype": "TCP" 00:21:39.678 } 00:21:39.678 ] 00:21:39.678 }, 00:21:39.678 { 00:21:39.678 "name": "nvmf_tgt_poll_group_003", 00:21:39.678 "admin_qpairs": 0, 00:21:39.678 "io_qpairs": 1, 00:21:39.678 "current_admin_qpairs": 0, 00:21:39.678 "current_io_qpairs": 1, 00:21:39.678 "pending_bdev_io": 0, 00:21:39.678 "completed_nvme_io": 18527, 00:21:39.678 "transports": [ 00:21:39.678 { 00:21:39.678 "trtype": "TCP" 00:21:39.678 } 00:21:39.678 ] 00:21:39.678 } 00:21:39.678 ] 00:21:39.678 }' 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:39.678 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 254769 00:21:47.831 Initializing NVMe Controllers 00:21:47.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:47.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:47.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:47.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:47.831 Initialization complete. Launching workers. 
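The `nvmf_get_stats` output above is validated with `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` piped through `wc -l`, yielding count=4, which the script compares against the expected four ADQ poll groups. A Python sketch of the same check, with the stats document abbreviated to the fields the filter uses (values taken from the log):

```python
import json

# Abbreviated nvmf_get_stats document; completed_nvme_io counts from the log.
stats = json.loads("""
{
  "tick_rate": 2700000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 1, "io_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 18567},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0, "io_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 18931},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0, "io_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 19136},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0, "io_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 18527}
  ]
}
""")

# Same predicate as the jq filter: poll groups with exactly one active I/O qpair.
count = sum(1 for g in stats["poll_groups"] if g["current_io_qpairs"] == 1)
assert count == 4  # the test expects all four poll groups to carry I/O
```

The point of the check is that ADQ has spread the I/O qpairs across all four poll groups rather than piling them onto one reactor.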
00:21:47.831 ======================================================== 00:21:47.831 Latency(us) 00:21:47.831 Device Information : IOPS MiB/s Average min max 00:21:47.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10508.70 41.05 6090.79 2439.78 10056.71 00:21:47.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10419.20 40.70 6143.10 2243.03 10273.21 00:21:47.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10217.80 39.91 6262.75 2521.59 10345.82 00:21:47.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10308.70 40.27 6209.30 2022.20 10746.73 00:21:47.831 ======================================================== 00:21:47.831 Total : 41454.39 161.93 6175.80 2022.20 10746.73 00:21:47.831 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.831 rmmod nvme_tcp 00:21:47.831 rmmod nvme_fabrics 00:21:47.831 rmmod nvme_keyring 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:47.831 19:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 254728 ']' 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 254728 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 254728 ']' 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 254728 00:21:47.831 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254728 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254728' 00:21:47.832 killing process with pid 254728 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 254728 00:21:47.832 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 254728 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:48.092 19:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.092 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.629 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.629 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:50.629 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:50.629 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:50.889 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:52.793 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.067 19:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:58.067 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.067 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:58.068 
Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:58.068 Found net devices under 0000:84:00.0: cvl_0_0 00:21:58.068 19:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:58.068 Found net devices under 0000:84:00.1: cvl_0_1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:21:58.068 00:21:58.068 --- 10.0.0.2 ping statistics --- 00:21:58.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.068 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
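The namespace plumbing that produces the two ping checks above follows the common.sh flow; condensed into one sketch (interface and namespace names as in this run, requires root, so shown as a configuration fragment rather than something runnable here):

```shell
# Move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Keeping the target port in a separate namespace is what lets a single host act as both NVMe/TCP target and initiator over real NIC ports.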
00:21:58.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:58.068 00:21:58.068 --- 10.0.0.1 ping statistics --- 00:21:58.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.068 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:58.068 net.core.busy_poll = 1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:58.068 net.core.busy_read = 1 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:58.068 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=257399 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
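The adq_configure_driver steps logged above boil down to a handful of commands; a sketch of the same sequence (run against the target port inside its namespace, queue layout as in this job, root required):

```shell
# Enable hardware TC offload and busy polling on the target port
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw)
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The flower filter with `skip_sw hw_tc 1` is the core of ADQ: the NIC classifies NVMe/TCP flows into the dedicated traffic class before the kernel ever sees them.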
257399 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 257399 ']' 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.068 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.327 [2024-12-06 19:20:43.136080] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:21:58.327 [2024-12-06 19:20:43.136163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.327 [2024-12-06 19:20:43.208448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.327 [2024-12-06 19:20:43.265764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.327 [2024-12-06 19:20:43.265834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.327 [2024-12-06 19:20:43.265858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.327 [2024-12-06 19:20:43.265869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:58.327 [2024-12-06 19:20:43.265879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.327 [2024-12-06 19:20:43.267380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.327 [2024-12-06 19:20:43.267437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.327 [2024-12-06 19:20:43.267504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.327 [2024-12-06 19:20:43.267507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.327 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.327 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:58.327 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:58.327 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:58.327 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 [2024-12-06 19:20:43.549495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.585 19:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.585 Malloc1 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.585 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.586 [2024-12-06 19:20:43.621015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=257551 
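adq_configure_nvmf_target drives the target entirely over RPC (the rpc_cmd calls above); assuming SPDK's stock scripts/rpc.py as the client, the equivalent standalone sequence would be roughly (the RPC variable and path are an assumption for illustration):

```shell
RPC=scripts/rpc.py   # hypothetical path, relative to an SPDK checkout
$RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
$RPC framework_start_init     # target was launched with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `--sock-priority 1` on the transport matches the `hw_tc 1` tc filter, so target sockets land in the ADQ traffic class.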
00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:58.586 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:01.107 "tick_rate": 2700000000, 00:22:01.107 "poll_groups": [ 00:22:01.107 { 00:22:01.107 "name": "nvmf_tgt_poll_group_000", 00:22:01.107 "admin_qpairs": 1, 00:22:01.107 "io_qpairs": 1, 00:22:01.107 "current_admin_qpairs": 1, 00:22:01.107 "current_io_qpairs": 1, 00:22:01.107 "pending_bdev_io": 0, 00:22:01.107 "completed_nvme_io": 24823, 00:22:01.107 "transports": [ 00:22:01.107 { 00:22:01.107 "trtype": "TCP" 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": "nvmf_tgt_poll_group_001", 00:22:01.107 "admin_qpairs": 0, 00:22:01.107 "io_qpairs": 3, 00:22:01.107 "current_admin_qpairs": 0, 00:22:01.107 "current_io_qpairs": 3, 00:22:01.107 "pending_bdev_io": 0, 00:22:01.107 "completed_nvme_io": 25974, 00:22:01.107 "transports": [ 00:22:01.107 { 00:22:01.107 "trtype": "TCP" 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": "nvmf_tgt_poll_group_002", 00:22:01.107 "admin_qpairs": 0, 00:22:01.107 "io_qpairs": 0, 00:22:01.107 "current_admin_qpairs": 0, 
00:22:01.107 "current_io_qpairs": 0, 00:22:01.107 "pending_bdev_io": 0, 00:22:01.107 "completed_nvme_io": 0, 00:22:01.107 "transports": [ 00:22:01.107 { 00:22:01.107 "trtype": "TCP" 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": "nvmf_tgt_poll_group_003", 00:22:01.107 "admin_qpairs": 0, 00:22:01.107 "io_qpairs": 0, 00:22:01.107 "current_admin_qpairs": 0, 00:22:01.107 "current_io_qpairs": 0, 00:22:01.107 "pending_bdev_io": 0, 00:22:01.107 "completed_nvme_io": 0, 00:22:01.107 "transports": [ 00:22:01.107 { 00:22:01.107 "trtype": "TCP" 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }' 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:01.107 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 257551 00:22:09.210 Initializing NVMe Controllers 00:22:09.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:09.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:09.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:09.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:09.210 Initialization complete. Launching workers. 
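The check at perf_adq.sh@108 counts poll groups with no active I/O queue pairs and requires that at least 2 of the 4 stay idle, i.e. that ADQ actually steered connections onto a subset of cores. A self-contained reproduction of that jq filter, fed a sample shaped like the nvmf_get_stats output above (group names from the log, qpair counts invented):

```shell
#!/usr/bin/env bash
# Minimal stand-in for the `rpc_cmd nvmf_get_stats` JSON captured in the log
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":3},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
# Same filter as the test script: one output line per idle poll group
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
echo "idle poll groups: $count"   # prints: idle poll groups: 2
```

With this sample, `[[ $count -lt 2 ]]` is false, so the ADQ placement check passes just as it did in the run above.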
00:22:09.210 ======================================================== 00:22:09.210 Latency(us) 00:22:09.210 Device Information : IOPS MiB/s Average min max 00:22:09.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4989.15 19.49 12886.44 1916.77 61466.53 00:22:09.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4426.38 17.29 14498.42 1850.24 61212.75 00:22:09.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13376.94 52.25 4784.01 1959.66 46591.07 00:22:09.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4232.89 16.53 15172.19 2155.45 62843.72 00:22:09.210 ======================================================== 00:22:09.210 Total : 27025.36 105.57 9497.95 1850.24 62843.72 00:22:09.210 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.210 rmmod nvme_tcp 00:22:09.210 rmmod nvme_fabrics 00:22:09.210 rmmod nvme_keyring 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:09.210 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 257399 ']' 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 257399 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 257399 ']' 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 257399 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257399 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257399' 00:22:09.210 killing process with pid 257399 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 257399 00:22:09.210 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 257399 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:09.210 19:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.210 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:11.742 00:22:11.742 real 0m45.552s 00:22:11.742 user 2m41.527s 00:22:11.742 sys 0m10.806s 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.742 ************************************ 00:22:11.742 END TEST nvmf_perf_adq 00:22:11.742 ************************************ 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.742 ************************************ 00:22:11.742 START TEST nvmf_shutdown 00:22:11.742 ************************************ 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:11.742 * Looking for test storage... 00:22:11.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.742 19:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:11.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.742 --rc genhtml_branch_coverage=1 00:22:11.742 --rc genhtml_function_coverage=1 00:22:11.742 --rc genhtml_legend=1 00:22:11.742 --rc geninfo_all_blocks=1 00:22:11.742 --rc geninfo_unexecuted_blocks=1 00:22:11.742 00:22:11.742 ' 00:22:11.742 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.743 --rc genhtml_branch_coverage=1 00:22:11.743 --rc genhtml_function_coverage=1 00:22:11.743 --rc genhtml_legend=1 00:22:11.743 --rc geninfo_all_blocks=1 00:22:11.743 --rc geninfo_unexecuted_blocks=1 00:22:11.743 00:22:11.743 ' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.743 --rc genhtml_branch_coverage=1 00:22:11.743 --rc genhtml_function_coverage=1 00:22:11.743 --rc genhtml_legend=1 00:22:11.743 --rc geninfo_all_blocks=1 00:22:11.743 --rc geninfo_unexecuted_blocks=1 00:22:11.743 00:22:11.743 ' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:11.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.743 --rc genhtml_branch_coverage=1 00:22:11.743 --rc genhtml_function_coverage=1 00:22:11.743 --rc genhtml_legend=1 00:22:11.743 --rc geninfo_all_blocks=1 00:22:11.743 --rc geninfo_unexecuted_blocks=1 00:22:11.743 00:22:11.743 ' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:11.743 ************************************ 00:22:11.743 START TEST nvmf_shutdown_tc1 00:22:11.743 ************************************ 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.743 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.646 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:13.647 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.647 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:13.647 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.647 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:13.647 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:13.647 Found net devices under 0000:84:00.0: cvl_0_0 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:13.647 Found net devices under 0000:84:00.1: cvl_0_1 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.647 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.648 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.648 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:13.907 00:22:13.907 --- 10.0.0.2 ping statistics --- 00:22:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.907 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:22:13.907 00:22:13.907 --- 10.0.0.1 ping statistics --- 00:22:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.907 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=260734 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 260734 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 260734 ']' 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:13.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.907 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:13.907 [2024-12-06 19:20:58.818616] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:13.907 [2024-12-06 19:20:58.818694] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.907 [2024-12-06 19:20:58.890856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.907 [2024-12-06 19:20:58.949524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.907 [2024-12-06 19:20:58.949577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.907 [2024-12-06 19:20:58.949592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.907 [2024-12-06 19:20:58.949603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.907 [2024-12-06 19:20:58.949614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:13.907 [2024-12-06 19:20:58.951353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.907 [2024-12-06 19:20:58.951418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.907 [2024-12-06 19:20:58.951484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:13.907 [2024-12-06 19:20:58.951487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.167 [2024-12-06 19:20:59.125921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.167 19:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.167 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.167 Malloc1 00:22:14.428 [2024-12-06 19:20:59.223456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.428 Malloc2 00:22:14.428 Malloc3 00:22:14.428 Malloc4 00:22:14.428 Malloc5 00:22:14.428 Malloc6 00:22:14.689 Malloc7 00:22:14.689 Malloc8 00:22:14.689 Malloc9 
00:22:14.689 Malloc10 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=260905 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 260905 /var/tmp/bdevperf.sock 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 260905 ']' 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:14.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": 
${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.689 "method": "bdev_nvme_attach_controller" 
00:22:14.689 } 00:22:14.689 EOF 00:22:14.689 )") 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.689 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.689 { 00:22:14.689 "params": { 00:22:14.689 "name": "Nvme$subsystem", 00:22:14.689 "trtype": "$TEST_TRANSPORT", 00:22:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.689 "adrfam": "ipv4", 00:22:14.689 "trsvcid": "$NVMF_PORT", 00:22:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.689 "hdgst": ${hdgst:-false}, 00:22:14.689 "ddgst": ${ddgst:-false} 00:22:14.689 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 } 00:22:14.690 EOF 00:22:14.690 )") 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.690 { 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme$subsystem", 00:22:14.690 "trtype": "$TEST_TRANSPORT", 00:22:14.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "$NVMF_PORT", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.690 "hdgst": ${hdgst:-false}, 00:22:14.690 "ddgst": ${ddgst:-false} 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 } 00:22:14.690 EOF 00:22:14.690 )") 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.690 { 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme$subsystem", 00:22:14.690 "trtype": "$TEST_TRANSPORT", 00:22:14.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "$NVMF_PORT", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.690 "hdgst": ${hdgst:-false}, 00:22:14.690 "ddgst": ${ddgst:-false} 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 } 00:22:14.690 EOF 00:22:14.690 )") 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:14.690 { 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme$subsystem", 00:22:14.690 "trtype": "$TEST_TRANSPORT", 00:22:14.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "$NVMF_PORT", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.690 "hdgst": ${hdgst:-false}, 00:22:14.690 "ddgst": ${ddgst:-false} 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 } 00:22:14.690 EOF 00:22:14.690 )") 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:14.690 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme1", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme2", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme3", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme4", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 
00:22:14.690 "params": { 00:22:14.690 "name": "Nvme5", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme6", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme7", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme8", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme9", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:14.690 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 },{ 00:22:14.690 "params": { 00:22:14.690 "name": "Nvme10", 00:22:14.690 "trtype": "tcp", 00:22:14.690 "traddr": "10.0.0.2", 00:22:14.690 "adrfam": "ipv4", 00:22:14.690 "trsvcid": "4420", 00:22:14.690 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:14.690 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:14.690 "hdgst": false, 00:22:14.690 "ddgst": false 00:22:14.690 }, 00:22:14.690 "method": "bdev_nvme_attach_controller" 00:22:14.690 }' 00:22:14.951 [2024-12-06 19:20:59.741891] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:14.951 [2024-12-06 19:20:59.741971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:14.951 [2024-12-06 19:20:59.816478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.951 [2024-12-06 19:20:59.876107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 260905 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:16.851 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:17.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 260905 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 260734 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": 
${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 
00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.785 )") 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.785 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.785 { 00:22:17.785 "params": { 00:22:17.785 "name": "Nvme$subsystem", 00:22:17.785 "trtype": "$TEST_TRANSPORT", 00:22:17.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.785 "adrfam": "ipv4", 00:22:17.785 "trsvcid": "$NVMF_PORT", 00:22:17.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.785 "hdgst": ${hdgst:-false}, 00:22:17.785 "ddgst": ${ddgst:-false} 00:22:17.785 }, 00:22:17.785 "method": "bdev_nvme_attach_controller" 00:22:17.785 } 00:22:17.785 EOF 00:22:17.786 )") 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.786 { 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme$subsystem", 00:22:17.786 "trtype": "$TEST_TRANSPORT", 00:22:17.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "$NVMF_PORT", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.786 "hdgst": ${hdgst:-false}, 00:22:17.786 "ddgst": ${ddgst:-false} 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 } 00:22:17.786 EOF 00:22:17.786 )") 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.786 { 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme$subsystem", 00:22:17.786 "trtype": "$TEST_TRANSPORT", 00:22:17.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "$NVMF_PORT", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.786 "hdgst": ${hdgst:-false}, 00:22:17.786 "ddgst": ${ddgst:-false} 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 } 00:22:17.786 EOF 00:22:17.786 )") 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.786 { 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme$subsystem", 00:22:17.786 "trtype": "$TEST_TRANSPORT", 00:22:17.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "$NVMF_PORT", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.786 "hdgst": ${hdgst:-false}, 00:22:17.786 "ddgst": ${ddgst:-false} 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 } 00:22:17.786 EOF 00:22:17.786 )") 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:17.786 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme1", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme2", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 
00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme3", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme4", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme5", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme6", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme7", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme8", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme9", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 },{ 00:22:17.786 "params": { 00:22:17.786 "name": "Nvme10", 00:22:17.786 "trtype": "tcp", 00:22:17.786 "traddr": "10.0.0.2", 00:22:17.786 "adrfam": "ipv4", 00:22:17.786 "trsvcid": "4420", 00:22:17.786 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:17.786 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:17.786 "hdgst": false, 00:22:17.786 "ddgst": false 00:22:17.786 }, 00:22:17.786 "method": "bdev_nvme_attach_controller" 00:22:17.786 }' 00:22:17.786 [2024-12-06 19:21:02.829412] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:22:17.786 [2024-12-06 19:21:02.829498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261418 ] 00:22:18.045 [2024-12-06 19:21:02.903396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.045 [2024-12-06 19:21:02.964946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.419 Running I/O for 1 seconds... 00:22:20.610 1673.00 IOPS, 104.56 MiB/s 00:22:20.610 Latency(us) 00:22:20.610 [2024-12-06T18:21:05.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.610 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.610 Verification LBA range: start 0x0 length 0x400 00:22:20.610 Nvme1n1 : 1.18 217.22 13.58 0.00 0.00 291782.54 21262.79 270299.59 00:22:20.610 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.610 Verification LBA range: start 0x0 length 0x400 00:22:20.610 Nvme2n1 : 1.16 220.83 13.80 0.00 0.00 281986.47 19903.53 267192.70 00:22:20.611 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme3n1 : 1.16 221.63 13.85 0.00 0.00 276586.76 22330.79 267192.70 00:22:20.611 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme4n1 : 1.14 223.98 14.00 0.00 0.00 267597.18 20971.52 273406.48 00:22:20.611 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme5n1 : 1.17 219.30 13.71 0.00 0.00 270773.85 19806.44 270299.59 00:22:20.611 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 
length 0x400 00:22:20.611 Nvme6n1 : 1.19 215.45 13.47 0.00 0.00 271422.96 20874.43 290494.39 00:22:20.611 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme7n1 : 1.17 218.45 13.65 0.00 0.00 262736.97 21262.79 267192.70 00:22:20.611 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme8n1 : 1.19 219.39 13.71 0.00 0.00 257034.99 3543.80 284280.60 00:22:20.611 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme9n1 : 1.19 218.68 13.67 0.00 0.00 253902.08 2572.89 287387.50 00:22:20.611 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:20.611 Verification LBA range: start 0x0 length 0x400 00:22:20.611 Nvme10n1 : 1.20 213.76 13.36 0.00 0.00 255599.31 20680.25 298261.62 00:22:20.611 [2024-12-06T18:21:05.660Z] =================================================================================================================== 00:22:20.611 [2024-12-06T18:21:05.660Z] Total : 2188.67 136.79 0.00 0.00 268894.50 2572.89 298261.62 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.871 rmmod nvme_tcp 00:22:20.871 rmmod nvme_fabrics 00:22:20.871 rmmod nvme_keyring 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 260734 ']' 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 260734 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 260734 ']' 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 260734 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260734 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260734' 00:22:20.871 killing process with pid 260734 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 260734 00:22:20.871 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 260734 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.439 19:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.439 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.440 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.343 00:22:23.343 real 0m11.835s 00:22:23.343 user 0m33.971s 00:22:23.343 sys 0m3.388s 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.343 ************************************ 00:22:23.343 END TEST nvmf_shutdown_tc1 00:22:23.343 ************************************ 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:23.343 ************************************ 00:22:23.343 START TEST nvmf_shutdown_tc2 00:22:23.343 ************************************ 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:23.343 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.343 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.343 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:23.344 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:23.344 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:23.344 Found net devices under 0000:84:00.0: cvl_0_0 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.344 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:23.344 Found net devices under 0000:84:00.1: cvl_0_1 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.344 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:22:23.603 00:22:23.603 --- 10.0.0.2 ping statistics --- 00:22:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.603 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:22:23.603 00:22:23.603 --- 10.0.0.1 ping statistics --- 00:22:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.603 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.603 
19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=262560 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 262560 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 262560 ']' 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.603 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.603 [2024-12-06 19:21:08.606615] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:22:23.603 [2024-12-06 19:21:08.606716] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.862 [2024-12-06 19:21:08.682244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.862 [2024-12-06 19:21:08.741375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.862 [2024-12-06 19:21:08.741450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.862 [2024-12-06 19:21:08.741464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.862 [2024-12-06 19:21:08.741476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.862 [2024-12-06 19:21:08.741486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.862 [2024-12-06 19:21:08.743273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.862 [2024-12-06 19:21:08.743332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.862 [2024-12-06 19:21:08.743399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:23.862 [2024-12-06 19:21:08.743402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.862 [2024-12-06 19:21:08.893580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.862 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.862 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.120 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.121 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.121 Malloc1 00:22:24.121 [2024-12-06 19:21:08.982769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.121 Malloc2 00:22:24.121 Malloc3 00:22:24.121 Malloc4 00:22:24.121 Malloc5 00:22:24.378 Malloc6 00:22:24.378 Malloc7 00:22:24.378 Malloc8 00:22:24.378 Malloc9 
00:22:24.378 Malloc10 00:22:24.378 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.378 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:24.378 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.378 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=262768 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 262768 /var/tmp/bdevperf.sock 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 262768 ']' 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
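The repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above (shutdown.sh@28-29) append one heredoc of RPC commands per subsystem into rpcs.txt, which the single `rpc_cmd` at @36 then replays in bulk. The heredoc body itself is not shown in this log, so the RPC lines in this sketch (create subsystem, add namespace, add listener) are illustrative assumptions mirroring the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener the log reports:

```shell
#!/usr/bin/env bash
# Hedged sketch of the rpcs.txt generation loop: one heredoc appended
# per subsystem index. The specific RPC names/arguments below are
# assumptions; only the loop-over-heredocs structure comes from the log.
rpcs_file="$(mktemp)"
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
cat <<EOF >> "$rpcs_file"
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# 10 subsystems x 3 RPC lines each
wc -l < "$rpcs_file"
```

Batching the RPCs into one file and issuing them in a single rpc_cmd invocation avoids paying the JSON-RPC connection setup cost once per subsystem.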
/var/tmp/bdevperf.sock...' 00:22:24.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": ${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": ${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": ${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": 
${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": ${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.638 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.638 { 00:22:24.638 "params": { 00:22:24.638 "name": "Nvme$subsystem", 00:22:24.638 "trtype": "$TEST_TRANSPORT", 00:22:24.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.638 "adrfam": "ipv4", 00:22:24.638 "trsvcid": "$NVMF_PORT", 00:22:24.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.638 "hdgst": ${hdgst:-false}, 00:22:24.638 "ddgst": ${ddgst:-false} 00:22:24.638 }, 00:22:24.638 "method": "bdev_nvme_attach_controller" 
00:22:24.638 } 00:22:24.638 EOF 00:22:24.638 )") 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.639 { 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme$subsystem", 00:22:24.639 "trtype": "$TEST_TRANSPORT", 00:22:24.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "$NVMF_PORT", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.639 "hdgst": ${hdgst:-false}, 00:22:24.639 "ddgst": ${ddgst:-false} 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 } 00:22:24.639 EOF 00:22:24.639 )") 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.639 { 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme$subsystem", 00:22:24.639 "trtype": "$TEST_TRANSPORT", 00:22:24.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "$NVMF_PORT", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.639 "hdgst": ${hdgst:-false}, 00:22:24.639 "ddgst": ${ddgst:-false} 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 } 00:22:24.639 EOF 00:22:24.639 )") 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.639 { 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme$subsystem", 00:22:24.639 "trtype": "$TEST_TRANSPORT", 00:22:24.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "$NVMF_PORT", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.639 "hdgst": ${hdgst:-false}, 00:22:24.639 "ddgst": ${ddgst:-false} 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 } 00:22:24.639 EOF 00:22:24.639 )") 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.639 { 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme$subsystem", 00:22:24.639 "trtype": "$TEST_TRANSPORT", 00:22:24.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "$NVMF_PORT", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.639 "hdgst": ${hdgst:-false}, 00:22:24.639 "ddgst": ${ddgst:-false} 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 } 00:22:24.639 EOF 00:22:24.639 )") 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:24.639 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme1", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 },{ 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme2", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 },{ 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme3", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 },{ 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme4", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 },{ 
00:22:24.639 "params": { 00:22:24.639 "name": "Nvme5", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.639 },{ 00:22:24.639 "params": { 00:22:24.639 "name": "Nvme6", 00:22:24.639 "trtype": "tcp", 00:22:24.639 "traddr": "10.0.0.2", 00:22:24.639 "adrfam": "ipv4", 00:22:24.639 "trsvcid": "4420", 00:22:24.639 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.639 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.639 "hdgst": false, 00:22:24.639 "ddgst": false 00:22:24.639 }, 00:22:24.639 "method": "bdev_nvme_attach_controller" 00:22:24.640 },{ 00:22:24.640 "params": { 00:22:24.640 "name": "Nvme7", 00:22:24.640 "trtype": "tcp", 00:22:24.640 "traddr": "10.0.0.2", 00:22:24.640 "adrfam": "ipv4", 00:22:24.640 "trsvcid": "4420", 00:22:24.640 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.640 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.640 "hdgst": false, 00:22:24.640 "ddgst": false 00:22:24.640 }, 00:22:24.640 "method": "bdev_nvme_attach_controller" 00:22:24.640 },{ 00:22:24.640 "params": { 00:22:24.640 "name": "Nvme8", 00:22:24.640 "trtype": "tcp", 00:22:24.640 "traddr": "10.0.0.2", 00:22:24.640 "adrfam": "ipv4", 00:22:24.640 "trsvcid": "4420", 00:22:24.640 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.640 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:24.640 "hdgst": false, 00:22:24.640 "ddgst": false 00:22:24.640 }, 00:22:24.640 "method": "bdev_nvme_attach_controller" 00:22:24.640 },{ 00:22:24.640 "params": { 00:22:24.640 "name": "Nvme9", 00:22:24.640 "trtype": "tcp", 00:22:24.640 "traddr": "10.0.0.2", 00:22:24.640 "adrfam": "ipv4", 00:22:24.640 "trsvcid": "4420", 00:22:24.640 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.640 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:22:24.640 "hdgst": false, 00:22:24.640 "ddgst": false 00:22:24.640 }, 00:22:24.640 "method": "bdev_nvme_attach_controller" 00:22:24.640 },{ 00:22:24.640 "params": { 00:22:24.640 "name": "Nvme10", 00:22:24.640 "trtype": "tcp", 00:22:24.640 "traddr": "10.0.0.2", 00:22:24.640 "adrfam": "ipv4", 00:22:24.640 "trsvcid": "4420", 00:22:24.640 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.640 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.640 "hdgst": false, 00:22:24.640 "ddgst": false 00:22:24.640 }, 00:22:24.640 "method": "bdev_nvme_attach_controller" 00:22:24.640 }' 00:22:24.640 [2024-12-06 19:21:09.485845] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:24.640 [2024-12-06 19:21:09.485920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262768 ] 00:22:24.640 [2024-12-06 19:21:09.559798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.640 [2024-12-06 19:21:09.620211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.538 Running I/O for 10 seconds... 
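The long `config+=("$(cat <<-EOF ...)")` run above is `gen_nvmf_target_json` (common.sh@560-586) building one JSON fragment per controller, then joining the fragments with commas (`IFS=,`) and printing the result, which bdevperf reads via `--json /dev/fd/63`. A reduced, runnable sketch of that pattern with three controllers, using the same variable names and NQN scheme the trace shows:

```shell
#!/usr/bin/env bash
# Hedged sketch of gen_nvmf_target_json: per-subsystem JSON fragments
# accumulated in an array, then joined with IFS=, as in the trace.
# Values mirror the log (tcp, 10.0.0.2, port 4420).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / printf step in the log does.
IFS=,
json="${config[*]}"
unset IFS
printf '%s\n' "$json"
```

The shell does the templating (variable expansion inside the heredocs) and the real script pipes the joined result through `jq .` to validate it before handing it to bdevperf.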
00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.538 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.796 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.796 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:26.796 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:26.796 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:27.055 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 262768 00:22:27.313 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 262768 ']' 
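The three samples above (read_io_count=3, then 67, then 131) come from the `waitforio` loop (shutdown.sh@58-70): it polls the bdev's `num_read_ops` up to 10 times, sleeping 0.25s between polls, and succeeds once the count reaches 100. A runnable sketch of that control flow; a stub stands in for the real `rpc_cmd bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'` call so no running target is needed:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforio retry loop. The stub below simulates
# I/O accumulating between polls, as the log's 3 -> 67 -> 131 samples do;
# everything else follows the loop structure shown in the trace.
reads=0
read_io_count() {
  # stands in for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
  #                  | jq -r '.bdevs[0].num_read_ops'
  reads=$((reads + 64))
}

ret=1
i=10
while (( i != 0 )); do
  read_io_count
  if [ "$reads" -ge 100 ]; then
    ret=0          # enough reads observed; the test may proceed to shutdown
    break
  fi
  i=$((i - 1))
  # the real script sleeps 0.25s between polls here
done
echo "ret=$ret reads=$reads"
```

Polling until real I/O has been observed ensures the subsequent `killprocess` exercises shutdown under load, which is the point of this tc2 case.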
00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 262768 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262768 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262768' 00:22:27.314 killing process with pid 262768 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 262768 00:22:27.314 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 262768 00:22:27.314 Received shutdown signal, test time was about 0.988518 seconds 00:22:27.314 00:22:27.314 Latency(us) 00:22:27.314 [2024-12-06T18:21:12.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.314 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme1n1 : 0.99 259.19 16.20 0.00 0.00 243636.15 19903.53 240784.12 00:22:27.314 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme2n1 : 0.95 219.20 13.70 0.00 0.00 277078.15 8058.50 245444.46 00:22:27.314 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme3n1 : 0.98 260.09 16.26 0.00 0.00 233904.36 16893.72 268746.15 00:22:27.314 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme4n1 : 0.98 261.23 16.33 0.00 0.00 228472.60 18350.08 256318.58 00:22:27.314 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme5n1 : 0.96 199.33 12.46 0.00 0.00 293084.98 20291.89 274959.93 00:22:27.314 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme6n1 : 0.95 203.12 12.69 0.00 0.00 281025.80 23884.23 267192.70 00:22:27.314 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme7n1 : 0.93 205.54 12.85 0.00 0.00 270761.28 38059.43 265639.25 00:22:27.314 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme8n1 : 0.95 201.94 12.62 0.00 0.00 270313.75 19418.07 267192.70 00:22:27.314 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme9n1 : 0.97 197.80 12.36 0.00 0.00 271272.26 22330.79 282727.16 00:22:27.314 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:27.314 Verification LBA range: start 0x0 length 0x400 00:22:27.314 Nvme10n1 : 0.97 196.95 12.31 0.00 0.00 266751.75 20486.07 299815.06 00:22:27.314 [2024-12-06T18:21:12.363Z] =================================================================================================================== 00:22:27.314 [2024-12-06T18:21:12.363Z] 
Total : 2204.38 137.77 0.00 0.00 261185.99 8058.50 299815.06 00:22:27.572 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.945 rmmod nvme_tcp 00:22:28.945 rmmod nvme_fabrics 00:22:28.945 rmmod nvme_keyring 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 262560 ']' 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 262560 ']' 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262560' 00:22:28.945 killing process with pid 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 262560 00:22:28.945 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 262560 00:22:29.205 
19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.205 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.751 00:22:31.751 real 0m7.872s 00:22:31.751 user 0m24.418s 00:22:31.751 sys 0m1.567s 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.751 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.751 ************************************ 00:22:31.751 END TEST nvmf_shutdown_tc2 00:22:31.751 ************************************ 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:31.751 ************************************ 00:22:31.751 START TEST nvmf_shutdown_tc3 00:22:31.751 ************************************ 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.751 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.752 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:31.752 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.752 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:31.752 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:31.752 Found net devices under 0000:84:00.0: cvl_0_0 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.752 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:31.753 Found net devices under 0000:84:00.1: cvl_0_1 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.753 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:31.753 00:22:31.753 --- 10.0.0.2 ping statistics --- 00:22:31.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.753 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:22:31.753 00:22:31.753 --- 10.0.0.1 ping statistics --- 00:22:31.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.753 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=263804 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 263804 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 263804 ']' 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.753 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.753 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.753 [2024-12-06 19:21:16.496200] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:31.753 [2024-12-06 19:21:16.496286] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.753 [2024-12-06 19:21:16.567477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.753 [2024-12-06 19:21:16.622212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.753 [2024-12-06 19:21:16.622275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.753 [2024-12-06 19:21:16.622305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.754 [2024-12-06 19:21:16.622317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.754 [2024-12-06 19:21:16.622326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.754 [2024-12-06 19:21:16.623875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.754 [2024-12-06 19:21:16.623938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.754 [2024-12-06 19:21:16.624005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.754 [2024-12-06 19:21:16.624009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.754 [2024-12-06 19:21:16.775588] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.754 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:31.754 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.012 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.012 Malloc1 00:22:32.012 [2024-12-06 19:21:16.882233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.012 Malloc2 00:22:32.012 Malloc3 00:22:32.012 Malloc4 00:22:32.012 Malloc5 00:22:32.272 Malloc6 00:22:32.272 Malloc7 00:22:32.272 Malloc8 00:22:32.272 Malloc9 
00:22:32.272 Malloc10 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=263867 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 263867 /var/tmp/bdevperf.sock 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 263867 ']' 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:32.531 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:32.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 
"adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": 
${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 
)") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:32.532 { 00:22:32.532 "params": { 00:22:32.532 "name": "Nvme$subsystem", 00:22:32.532 "trtype": "$TEST_TRANSPORT", 00:22:32.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.532 "adrfam": "ipv4", 00:22:32.532 "trsvcid": "$NVMF_PORT", 00:22:32.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.532 "hdgst": ${hdgst:-false}, 00:22:32.532 "ddgst": ${ddgst:-false} 00:22:32.532 }, 00:22:32.532 "method": "bdev_nvme_attach_controller" 00:22:32.532 } 00:22:32.532 EOF 00:22:32.532 )") 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:32.532 
19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:32.532 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:32.533 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme1", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme2", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme3", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme4", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 
00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme5", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme6", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme7", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme8", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme9", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 },{ 00:22:32.533 "params": { 00:22:32.533 "name": "Nvme10", 00:22:32.533 "trtype": "tcp", 00:22:32.533 "traddr": "10.0.0.2", 00:22:32.533 "adrfam": "ipv4", 00:22:32.533 "trsvcid": "4420", 00:22:32.533 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.533 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.533 "hdgst": false, 00:22:32.533 "ddgst": false 00:22:32.533 }, 00:22:32.533 "method": "bdev_nvme_attach_controller" 00:22:32.533 }' 00:22:32.533 [2024-12-06 19:21:17.409095] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:32.533 [2024-12-06 19:21:17.409168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263867 ] 00:22:32.533 [2024-12-06 19:21:17.483637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.533 [2024-12-06 19:21:17.544585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.434 Running I/O for 10 seconds... 
00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:34.692 19:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:34.692 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:34.950 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.208 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:35.483 19:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 263804 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 263804 ']' 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 263804 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263804 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263804' 00:22:35.483 killing process with pid 263804 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 263804 00:22:35.483 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 263804 00:22:35.483 [2024-12-06 19:21:20.322749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391310 is same with the state(6) to be set 00:22:35.483 [2024-12-06 19:21:20.322913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391310 is same with the state(6) to be set 00:22:35.483 [2024-12-06 19:21:20.322931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2391310 is same with the state(6) to be set 00:22:35.483 [2024-12-06 19:21:20.325301]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325493] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.484 [2024-12-06 19:21:20.325623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325641] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.484 [2024-12-06 19:21:20.325664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.484 [2024-12-06 19:21:20.325684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.484 [2024-12-06 19:21:20.325700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.325760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325771] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.485 [2024-12-06 19:21:20.325787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.325824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.485 [2024-12-06 19:21:20.325853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.325865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e910 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 
19:21:20.325907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.325989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326063] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.326194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122760 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.327958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.327997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 
00:22:35.485 [2024-12-06 19:21:20.328570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the 
state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.485 [2024-12-06 19:21:20.328687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.485 [2024-12-06 19:21:20.328737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.485 [2024-12-06 19:21:20.328744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.328751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.328803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 
is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.328828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.328853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.328880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.328918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.328942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.328965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.328992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2391cb0 is same with the state(6) to be set 00:22:35.486 [2024-12-06 19:21:20.329124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.329952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.329978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.330004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.330046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.330071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.330097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 
19:21:20.330122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.330147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.330172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.330198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.330223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.486 [2024-12-06 19:21:20.330249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.486 [2024-12-06 19:21:20.330278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:1[2024-12-06 19:21:20.330299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23921a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.330333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.330336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23921a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.330355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x23921a0 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.330362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1[2024-12-06 19:21:20.330368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23921a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.330388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.487 [2024-12-06 19:21:20.330634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.330960] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.330986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.331008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128[2024-12-06 19:21:20.331117] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with [2024-12-06 19:21:20.331137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:35.487 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) 
to be set 00:22:35.487 [2024-12-06 19:21:20.331219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128[2024-12-06 19:21:20.331279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same 
with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with [2024-12-06 19:21:20.331323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128the state(6) to be set 00:22:35.487 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.331353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 [2024-12-06 19:21:20.331391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.331414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with dw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:12[2024-12-06 19:21:20.331443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.487 the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.487 [2024-12-06 19:21:20.331470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.487 [2024-12-06 19:21:20.331483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:12[2024-12-06 19:21:20.331495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.331523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with 
dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:12[2024-12-06 19:21:20.331560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.331585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:35.488 [2024-12-06 19:21:20.331657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.331870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392670 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.332806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.332839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.332874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.332900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.332929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.332954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.332983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 
[2024-12-06 19:21:20.333036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.333089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.333135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.333194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:1[2024-12-06 19:21:20.333218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with 28 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-06 19:21:20.333243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.333302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2392b40 is same with [2024-12-06 19:21:20.333318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:1the state(6) to be set 00:22:35.488 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.488 [2024-12-06 19:21:20.333338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.488 [2024-12-06 19:21:20.333349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with [2024-12-06 19:21:20.333345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:35.488 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.488 [2024-12-06 19:21:20.333363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.333386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with [2024-12-06 19:21:20.333399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:22:35.489 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.333418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333431] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.333442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.333465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.333492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.333516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 
[2024-12-06 19:21:20.333528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489
[2024-12-06 19:21:20.333931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489
[2024-12-06 19:21:20.333969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489
[2024-12-06 19:21:20.333981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.333980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.333993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392b40 is same with the state(6) to be set 00:22:35.489 [2024-12-06 19:21:20.334009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.334033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.334086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.334137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.334190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 
[2024-12-06 19:21:20.334241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-12-06 19:21:20.334292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.489 [2024-12-06 19:21:20.334321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.334966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.334993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-12-06 19:21:20.335536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490 [2024-12-06 19:21:20.335560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490 [2024-12-06 19:21:20.335573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.490
[2024-12-06 19:21:20.335589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490
[2024-12-06 19:21:20.335616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490
[2024-12-06 19:21:20.335632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490
[2024-12-06 19:21:20.335659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490
[2024-12-06 19:21:20.335684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490
[2024-12-06 19:21:20.335711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490
[2024-12-06 19:21:20.335763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490
[2024-12-06 19:21:20.335791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490
[2024-12-06 19:21:20.335816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.490
[2024-12-06 19:21:20.335845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.490
[2024-12-06 19:21:20.335876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.335888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.335913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.335948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.335979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.335985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.335992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336260] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491
[2024-12-06 19:21:20.336354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06 19:21:20.336361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491
[2024-12-06 19:21:20.336386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393010 is same with the state(6) to be set 00:22:35.491
[2024-12-06
19:21:20.336430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:35.491 [2024-12-06 19:21:20.337272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3960 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.491 [2024-12-06 19:21:20.337671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.491 [2024-12-06 19:21:20.337691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.491 [2024-12-06 19:21:20.337714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.337768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.337793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337814] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.337829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b8c0 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.337934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492
[2024-12-06 19:21:20.337961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.337989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338097]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e480 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338224] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06
19:21:20.338336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.492 [2024-12-06 19:21:20.338400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.492 [2024-12-06 19:21:20.338426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492
[2024-12-06 19:21:20.338439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad6110 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121f10 is same with the state(6) to be set 00:22:35.492 [2024-12-06 19:21:20.338510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.338815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.338969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.338994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.339015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c560 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339097] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.339126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.339151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.339176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.339199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.339222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.339245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.493 [2024-12-06 19:21:20.339268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.339276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63cd0 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 
00:22:35.493 [2024-12-06 19:21:20.339347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e910 (9): Bad file descriptor 00:22:35.493 [2024-12-06 19:21:20.339359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 
19:21:20.339479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.339643] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.342874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:35.493 [2024-12-06 19:21:20.342925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:35.493 [2024-12-06 19:21:20.342973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:22:35.493 [2024-12-06 19:21:20.343009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e480 (9): Bad file descriptor 00:22:35.493 [2024-12-06 19:21:20.346141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.493 [2024-12-06 19:21:20.346179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6e480 with addr=10.0.0.2, port=4420 00:22:35.493 [2024-12-06 19:21:20.346206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e480 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.346416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.493 [2024-12-06 19:21:20.346448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:22:35.493 [2024-12-06 19:21:20.346487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(6) to be set 00:22:35.493 [2024-12-06 19:21:20.346590] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.493 [2024-12-06 19:21:20.346678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.346770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.346838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.346866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.346893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.346920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.346946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.346972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.346998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.347038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.347064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.347089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.347116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.347142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.347174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.493 [2024-12-06 19:21:20.347201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.493 [2024-12-06 19:21:20.347225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:35.494 [2024-12-06 19:21:20.347413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 
19:21:20.347679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.347952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.347977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.348001] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd73070 is same with the state(6) to be set 00:22:35.494 [2024-12-06 19:21:20.348227] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.494 [2024-12-06 19:21:20.348350] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.494 [2024-12-06 19:21:20.348751] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.494 [2024-12-06 19:21:20.348800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e480 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.348838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.348916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.494 [2024-12-06 19:21:20.348944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.348971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.494 [2024-12-06 19:21:20.348994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.349031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.494 [2024-12-06 19:21:20.349053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.349079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:35.494 [2024-12-06 19:21:20.349101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.349122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc7d40 is same with the state(6) to be set 00:22:35.494 [2024-12-06 19:21:20.349167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3960 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.349245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8b8c0 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.349295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6110 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.349343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8c560 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.349389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63cd0 (9): Bad file descriptor 00:22:35.494 [2024-12-06 19:21:20.350757] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.494 [2024-12-06 19:21:20.350879] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:35.494 [2024-12-06 19:21:20.351062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:35.494 [2024-12-06 19:21:20.351135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:35.494 [2024-12-06 19:21:20.351160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:35.494 [2024-12-06 19:21:20.351196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:22:35.494 [2024-12-06 19:21:20.351221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:35.494 [2024-12-06 19:21:20.351246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:35.494 [2024-12-06 19:21:20.351267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:35.494 [2024-12-06 19:21:20.351286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:35.494 [2024-12-06 19:21:20.351308] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:35.494 [2024-12-06 19:21:20.351379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.494 [2024-12-06 19:21:20.351984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.494 [2024-12-06 19:21:20.352025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352209] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 
19:21:20.352841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.352967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.352994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.495 [2024-12-06 19:21:20.353776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.353988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.354027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.354057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.354095] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.354120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.354144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.354169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.495 [2024-12-06 19:21:20.354193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.495 [2024-12-06 19:21:20.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.354241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.357210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) 
to be set 00:22:35.496 [2024-12-06 19:21:20.357307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 
[2024-12-06 19:21:20.357455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357594] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.357662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2122290 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.363956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.363994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.496 [2024-12-06 19:21:20.364120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.364594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.364620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10936c0 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.366358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:35.496 [2024-12-06 19:21:20.366643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.496 [2024-12-06 19:21:20.366684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb63cd0 with addr=10.0.0.2, port=4420 00:22:35.496 [2024-12-06 19:21:20.366713] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63cd0 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.366814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc7d40 (9): Bad file descriptor 00:22:35.496 [2024-12-06 19:21:20.366910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.496 [2024-12-06 19:21:20.366947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.366977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.496 [2024-12-06 19:21:20.367001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.367027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.496 [2024-12-06 19:21:20.367050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.367076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.496 [2024-12-06 19:21:20.367099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.367123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcc360 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.367197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63cd0 (9): Bad file 
descriptor 00:22:35.496 [2024-12-06 19:21:20.368022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.496 [2024-12-06 19:21:20.368070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6e910 with addr=10.0.0.2, port=4420 00:22:35.496 [2024-12-06 19:21:20.368098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e910 is same with the state(6) to be set 00:22:35.496 [2024-12-06 19:21:20.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.496 [2024-12-06 19:21:20.368571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.496 [2024-12-06 19:21:20.368605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.368949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.368974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.497 [2024-12-06 19:21:20.369112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.369953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.369982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 
19:21:20.370332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.497 [2024-12-06 19:21:20.370519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.497 [2024-12-06 19:21:20.370544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.370953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.370983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 
[2024-12-06 19:21:20.371247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.371684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.371708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf73a00 is same with the state(6) to be set 00:22:35.498 [2024-12-06 19:21:20.373203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.498 [2024-12-06 19:21:20.373645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.373953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.373983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.374008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.374036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.374060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.374089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.374114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.498 [2024-12-06 19:21:20.374142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.498 [2024-12-06 19:21:20.374167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 
19:21:20.374879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.374961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.374984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 
[2024-12-06 19:21:20.375814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.375951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.375976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.499 [2024-12-06 19:21:20.376294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.499 [2024-12-06 19:21:20.376323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.376672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.376699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf74cc0 is same with the state(6) to be set 
00:22:35.500 [2024-12-06 19:21:20.378217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.378960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.378987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.500 [2024-12-06 19:21:20.379143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379440] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.500 [2024-12-06 19:21:20.379926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.500 [2024-12-06 19:21:20.379952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.379979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 
19:21:20.380357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.380955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.380982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 
[2024-12-06 19:21:20.381269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.381651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.381676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf75f80 is same with the state(6) to be set 00:22:35.501 [2024-12-06 19:21:20.383354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.383388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.383427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.383454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.383483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.501 [2024-12-06 19:21:20.383507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.501 [2024-12-06 19:21:20.383537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.502 [2024-12-06 19:21:20.383884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.383959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.383989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.384958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 
19:21:20.385090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385394] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.502 [2024-12-06 19:21:20.385692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.502 [2024-12-06 19:21:20.385730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.385795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.385821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.385853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.385880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.385908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.385933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.385962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.385987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 
[2024-12-06 19:21:20.386047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.503 [2024-12-06 19:21:20.386894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.503 [2024-12-06 19:21:20.386920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108cc00 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.388371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.388415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.388458] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.388495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.388531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.388650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e910 (9): Bad file descriptor 00:22:35.503 [2024-12-06 19:21:20.388686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:35.503 [2024-12-06 19:21:20.388710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:35.503 [2024-12-06 19:21:20.388748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:35.503 [2024-12-06 19:21:20.388778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:35.503 [2024-12-06 19:21:20.388868] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:35.503 [2024-12-06 19:21:20.388909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcc360 (9): Bad file descriptor 00:22:35.503 [2024-12-06 19:21:20.388982] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:22:35.503 [2024-12-06 19:21:20.389277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:35.503 [2024-12-06 19:21:20.389578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.503 [2024-12-06 19:21:20.389619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:22:35.503 [2024-12-06 19:21:20.389646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.389826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.503 [2024-12-06 19:21:20.389861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6e480 with addr=10.0.0.2, port=4420 00:22:35.503 [2024-12-06 19:21:20.389888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e480 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.390008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.503 [2024-12-06 19:21:20.390042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8c560 with addr=10.0.0.2, port=4420 00:22:35.503 [2024-12-06 19:21:20.390067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c560 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.390257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.503 [2024-12-06 19:21:20.390292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad6110 with addr=10.0.0.2, port=4420 00:22:35.503 [2024-12-06 19:21:20.390318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad6110 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.390441] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.503 [2024-12-06 19:21:20.390473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8b8c0 with addr=10.0.0.2, port=4420 00:22:35.503 [2024-12-06 19:21:20.390499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b8c0 is same with the state(6) to be set 00:22:35.503 [2024-12-06 19:21:20.390523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:35.503 [2024-12-06 19:21:20.390546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:35.503 [2024-12-06 19:21:20.390570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:35.503 [2024-12-06 19:21:20.390593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:35.503 [2024-12-06 19:21:20.391711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.503 [2024-12-06 19:21:20.391754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba 16512-24448, step 128), timestamps 19:21:20.391791-19:21:20.395209 ...]
00:22:35.505 [2024-12-06 19:21:20.395235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebcdb0 is same with the state(6) to be set
00:22:35.505 [2024-12-06 19:21:20.397116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:35.505 [2024-12-06 19:21:20.397169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:35.505 [2024-12-06 19:21:20.397463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:35.505 [2024-12-06 19:21:20.397504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff3960 with addr=10.0.0.2, port=4420
00:22:35.505 [2024-12-06 19:21:20.397534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3960 is same with the state(6) to be set
00:22:35.505 [2024-12-06 19:21:20.397571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor
00:22:35.505 [2024-12-06 19:21:20.397614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e480 (9): Bad file descriptor
00:22:35.505 [2024-12-06 19:21:20.397650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0xf8c560 (9): Bad file descriptor
00:22:35.505 [2024-12-06 19:21:20.397683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6110 (9): Bad file descriptor
00:22:35.505 [2024-12-06 19:21:20.397718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8b8c0 (9): Bad file descriptor
00:22:35.505 [2024-12-06 19:21:20.397925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.505 [2024-12-06 19:21:20.397957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-43 (lba 16512-21888, step 128), timestamps 19:21:20.397992-19:21:20.400341 ...]
00:22:35.506 [2024-12-06 19:21:20.400367] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-12-06 19:21:20.400393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.506 [2024-12-06 19:21:20.400421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-12-06 19:21:20.400447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.506 [2024-12-06 19:21:20.400475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-12-06 19:21:20.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.506 [2024-12-06 19:21:20.400529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-12-06 19:21:20.400556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.506 [2024-12-06 19:21:20.400589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.506 [2024-12-06 19:21:20.400616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.506 [2024-12-06 19:21:20.400643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.400947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.400974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 
[2024-12-06 19:21:20.401000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.507 [2024-12-06 19:21:20.401435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.507 [2024-12-06 19:21:20.401461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108b970 is same with the state(6) to be set 00:22:35.507 [2024-12-06 19:21:20.401743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.507 [2024-12-06 19:21:20.401781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb63cd0 with addr=10.0.0.2, port=4420 00:22:35.507 [2024-12-06 19:21:20.401809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63cd0 is same with the state(6) to be set 00:22:35.507 [2024-12-06 19:21:20.401949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.507 [2024-12-06 19:21:20.401983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc7d40 with addr=10.0.0.2, port=4420 00:22:35.507 [2024-12-06 19:21:20.402009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xfc7d40 is same with the state(6) to be set 00:22:35.507 [2024-12-06 19:21:20.402039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3960 (9): Bad file descriptor 00:22:35.507 [2024-12-06 19:21:20.402071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:35.507 [2024-12-06 19:21:20.402093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:35.507 [2024-12-06 19:21:20.402120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:35.507 [2024-12-06 19:21:20.402145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:35.507 [2024-12-06 19:21:20.402175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:35.507 [2024-12-06 19:21:20.402197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:35.507 [2024-12-06 19:21:20.402220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:35.507 [2024-12-06 19:21:20.402241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:35.507 [2024-12-06 19:21:20.402267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:35.507 [2024-12-06 19:21:20.402288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:35.507 [2024-12-06 19:21:20.402309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:22:35.507 [2024-12-06 19:21:20.402332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:35.507 [2024-12-06 19:21:20.402356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:35.507 [2024-12-06 19:21:20.402384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:35.507 [2024-12-06 19:21:20.402408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:35.507 [2024-12-06 19:21:20.402431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:35.507 [2024-12-06 19:21:20.402455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:35.507 [2024-12-06 19:21:20.402477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:35.507 [2024-12-06 19:21:20.402501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:35.507 [2024-12-06 19:21:20.402521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:35.507 [2024-12-06 19:21:20.402585] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:22:35.507 [2024-12-06 19:21:20.402672] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:22:35.507 [2024-12-06 19:21:20.404538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:35.507 task offset: 26112 on job bdev=Nvme2n1 fails
00:22:35.507
00:22:35.507 Latency(us)
00:22:35.507 [2024-12-06T18:21:20.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.507 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme1n1 ended in about 0.94 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme1n1 : 0.94 136.19 8.51 68.09 0.00 309827.51 41554.68 257872.02
00:22:35.507 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme2n1 ended in about 0.91 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme2n1 : 0.91 212.08 13.25 69.96 0.00 219524.06 11942.12 250104.79
00:22:35.507 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme3n1 ended in about 0.92 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme3n1 : 0.92 206.60 12.91 24.88 0.00 261088.26 18932.62 281173.71
00:22:35.507 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme4n1 ended in about 0.92 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme4n1 : 0.92 209.56 13.10 69.85 0.00 212046.79 8592.50 278066.82
00:22:35.507 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme5n1 ended in about 0.95 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme5n1 : 0.95 145.74 9.11 61.25 0.00 280005.66 27185.30 264085.81
00:22:35.507 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme6n1 ended in about 0.95 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme6n1 : 0.95 138.67 8.67 67.23 0.00 276330.59 34564.17 251658.24
00:22:35.507 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme7n1 : 0.96 133.77 8.36 66.88 0.00 277375.56 18932.62 271853.04
00:22:35.507 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme8n1 ended in about 0.97 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme8n1 : 0.97 131.90 8.24 65.95 0.00 275756.63 18835.53 278066.82
00:22:35.507 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme9n1 ended in about 0.98 seconds with error
00:22:35.507 Verification LBA range: start 0x0 length 0x400
00:22:35.507 Nvme9n1 : 0.98 130.84 8.18 65.42 0.00 272467.94 22524.97 273406.48
00:22:35.507 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:35.507 Job: Nvme10n1 ended in about 0.96 seconds with error
00:22:35.508 Verification LBA range: start 0x0 length 0x400
00:22:35.508 Nvme10n1 : 0.96 133.04 8.31 66.52 0.00 261120.51 19612.25 290494.39
00:22:35.508 [2024-12-06T18:21:20.557Z] ===================================================================================================================
00:22:35.508 [2024-12-06T18:21:20.557Z] Total : 1578.38 98.65 626.05 0.00 261525.59 8592.50 290494.39
00:22:35.508 [2024-12-06 19:21:20.442658] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:35.508 [2024-12-06 19:21:20.442833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63cd0 (9): Bad file descriptor
00:22:35.508 [2024-12-06 19:21:20.442888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc7d40 (9): Bad file descriptor
00:22:35.508 [2024-12-06
19:21:20.442921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.442942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.442968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.442994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.443145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.443761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.443803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcc360 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.443836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcc360 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.443861] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.443886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.443908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.443930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.443956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.443977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.444002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.444025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:22:35.508 [2024-12-06 19:21:20.444735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.444779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6e910 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.444835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e910 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.444981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.445011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8b8c0 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.445037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b8c0 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.445259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.445289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad6110 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.445315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad6110 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.445468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.445500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8c560 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.445526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c560 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.445676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.445705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0xb6e480 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.445740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6e480 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.445861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.445894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bac0 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.445920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bac0 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.445948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcc360 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.446355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.446386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:35.508 [2024-12-06 19:21:20.446449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e910 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8b8c0 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad6110 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8c560 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446580] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6e480 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8bac0 (9): Bad file descriptor 00:22:35.508 [2024-12-06 19:21:20.446639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.446660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.446689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.446711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.447006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.447041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc7d40 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.447066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc7d40 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.447290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.447319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb63cd0 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.447346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb63cd0 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.447569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.508 [2024-12-06 19:21:20.447598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0xff3960 with addr=10.0.0.2, port=4420 00:22:35.508 [2024-12-06 19:21:20.447625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3960 is same with the state(6) to be set 00:22:35.508 [2024-12-06 19:21:20.447649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.447671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.447693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.447714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.447753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.447775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.447798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.447820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.447844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.447867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.447888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:22:35.508 [2024-12-06 19:21:20.447910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.447935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.447956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.447979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.447999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.448023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.448044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:35.508 [2024-12-06 19:21:20.448070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:35.508 [2024-12-06 19:21:20.448093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:35.508 [2024-12-06 19:21:20.448116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:35.508 [2024-12-06 19:21:20.448138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:35.509 [2024-12-06 19:21:20.448159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:22:35.509 [2024-12-06 19:21:20.448179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:35.509 [2024-12-06 19:21:20.448240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc7d40 (9): Bad file descriptor 00:22:35.509 [2024-12-06 19:21:20.448275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb63cd0 (9): Bad file descriptor 00:22:35.509 [2024-12-06 19:21:20.448308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3960 (9): Bad file descriptor 00:22:35.509 [2024-12-06 19:21:20.448389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:35.509 [2024-12-06 19:21:20.448417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:35.509 [2024-12-06 19:21:20.448440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:35.509 [2024-12-06 19:21:20.448461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:35.509 [2024-12-06 19:21:20.448486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:35.509 [2024-12-06 19:21:20.448508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:35.509 [2024-12-06 19:21:20.448531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:35.509 [2024-12-06 19:21:20.448553] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:35.509 [2024-12-06 19:21:20.448575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:35.509 [2024-12-06 19:21:20.448598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:35.509 [2024-12-06 19:21:20.448617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:35.509 [2024-12-06 19:21:20.448639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:36.078 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:37.017 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 263867 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 263867 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 263867 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:37.018 19:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.018 rmmod nvme_tcp 00:22:37.018 rmmod nvme_fabrics 00:22:37.018 rmmod nvme_keyring 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 263804 ']' 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 263804 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 263804 ']' 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 263804 00:22:37.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (263804) - No such process 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 263804 is not found' 00:22:37.018 Process with pid 263804 is not found 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:37.018 19:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.018 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.556 00:22:39.556 real 0m7.735s 00:22:39.556 user 0m19.854s 00:22:39.556 sys 0m1.502s 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.556 ************************************ 00:22:39.556 END TEST nvmf_shutdown_tc3 00:22:39.556 ************************************ 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:39.556 ************************************ 00:22:39.556 START TEST nvmf_shutdown_tc4 00:22:39.556 ************************************ 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.556 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.556 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.557 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:39.557 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:39.557 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.557 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:39.557 Found net devices under 0000:84:00.0: cvl_0_0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:39.557 Found net devices under 0000:84:00.1: cvl_0_1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.557 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:22:39.557 00:22:39.557 --- 10.0.0.2 ping statistics --- 00:22:39.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.557 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:22:39.557 00:22:39.557 --- 10.0.0.1 ping statistics --- 00:22:39.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.557 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.557 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=264793 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 264793 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 264793 ']' 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 [2024-12-06 19:21:24.305224] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:22:39.557 [2024-12-06 19:21:24.305313] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.557 [2024-12-06 19:21:24.381678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.557 [2024-12-06 19:21:24.439033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.557 [2024-12-06 19:21:24.439097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.557 [2024-12-06 19:21:24.439125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.557 [2024-12-06 19:21:24.439139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.557 [2024-12-06 19:21:24.439156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.557 [2024-12-06 19:21:24.440675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.557 [2024-12-06 19:21:24.440804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.557 [2024-12-06 19:21:24.440869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.557 [2024-12-06 19:21:24.440873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.557 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:39.558 [2024-12-06 19:21:24.578744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.558 19:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.558 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:39.814 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:39.814 Malloc1
[2024-12-06 19:21:24.663445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:39.814 Malloc2
00:22:39.814 Malloc3
00:22:39.814 Malloc4
00:22:39.814 Malloc5
00:22:40.071 Malloc6
00:22:40.071 Malloc7
00:22:40.071 Malloc8
00:22:40.071 Malloc9
00:22:40.071 Malloc10
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=264962
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:22:40.071 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:22:40.339 [2024-12-06 19:21:25.169907] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
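The xtrace above shows the test launching spdk_nvme_perf in the background, recording its pid in perfpid, and sleeping before arming a cleanup trap. A minimal, generic sketch of that launch-and-remember pattern (the helper name run_bg_workload is an assumption, and 'sleep' stands in for the real perf binary so the sketch is runnable):

```shell
# Generic sketch of the background-workload pattern seen in the trace above.
# run_bg_workload is a hypothetical helper; the real test runs spdk_nvme_perf.
run_bg_workload() {
    "$@" &            # start the workload detached from the test flow
    perfpid=$!        # record its pid, as the trace's perfpid= line does
    # Make sure the workload is reaped even if the test exits early
    trap 'kill -9 "$perfpid" 2>/dev/null || true' EXIT
}
```

The trap mirrors the trace's later `trap ... SIGINT SIGTERM EXIT`: whatever happens to the test, the background load generator is killed rather than leaking into the next CI job.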
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 264793
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 264793 ']'
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 264793
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264793
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264793'
killing process with pid 264793
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 264793
00:22:45.614 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 264793
00:22:45.614 Write completed with error (sct=0, sc=8)
00:22:45.614 starting I/O failed: -6
00:22:45.614 Write completed with error (sct=0, sc=8)
00:22:45.614 Write completed with error (sct=0, sc=8)
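The killprocess call traced above performs four checks before killing the target app: the pid argument is non-empty, the process answers kill -0, the platform is Linux, and the process's comm name is not sudo (guarding against a recycled pid). A simplified, self-contained sketch of that helper, reconstructed from the xtrace (the real helper lives in SPDK's common/autotest_common.sh; the sudo branch is reduced to a bail-out here):

```shell
# Simplified sketch of the killprocess helper whose xtrace appears above.
# Reconstructed from the trace, not copied from SPDK source.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                # no pid recorded: nothing to kill
    kill -0 "$pid" 2>/dev/null || return 1   # process must exist and be signalable
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # pid was recycled; don't touch it
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap the child so the pid is gone
}
```

The final wait matches the trace's `wait 264793`: it both collects the exit status and guarantees the pid has been fully reaped before the test proceeds to shut down the transport.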
00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed 
with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 [2024-12-06 19:21:30.162617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 [2024-12-06 19:21:30.162917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 starting I/O failed: -6 00:22:45.614 [2024-12-06 19:21:30.162964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 [2024-12-06 19:21:30.162980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 starting I/O failed: -6 00:22:45.614 [2024-12-06 19:21:30.162993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 [2024-12-06 19:21:30.163006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 [2024-12-06 19:21:30.163020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fd080 is same with the state(6) to be set 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 
Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 Write completed with error (sct=0, sc=8) 00:22:45.614 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with 
error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 [2024-12-06 19:21:30.164127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: 
-6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with 
error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 [2024-12-06 19:21:30.165653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O 
failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.615 starting I/O failed: -6 00:22:45.615 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting 
I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 
starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 [2024-12-06 19:21:30.167719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:45.616 NVMe io qpair process completion error 00:22:45.616 [2024-12-06 19:21:30.168102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5540 is same with the state(6) to be set 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 [2024-12-06 19:21:30.168131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5540 is same with the state(6) to be set 00:22:45.616 [2024-12-06 19:21:30.168148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5540 is same with the state(6) to be set 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 [2024-12-06 19:21:30.168160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e5540 is same with the state(6) to be set 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with 
error (sct=0, sc=8)
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 starting I/O failed: -6
00:22:45.616 [2024-12-06 19:21:30.168488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 [2024-12-06 19:21:30.168538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 [2024-12-06 19:21:30.168560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 [2024-12-06 19:21:30.168574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 [2024-12-06 19:21:30.168587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 [2024-12-06 19:21:30.168600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 starting I/O failed: -6
00:22:45.616 [2024-12-06 19:21:30.168615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 [2024-12-06 19:21:30.168627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68c870 is same with the state(6) to be set
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 starting I/O failed: -6
00:22:45.616 Write completed with error (sct=0, sc=8)
00:22:45.616 Write
completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 starting I/O failed: -6 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.616 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 [2024-12-06 19:21:30.169166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 
00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 [2024-12-06 19:21:30.170418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such 
device or address) on qpair id 3 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6 00:22:45.617 Write 
completed with error (sct=0, sc=8) 00:22:45.617 starting I/O failed: -6
00:22:45.617 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated for the remaining outstanding writes]
00:22:45.617 [2024-12-06 19:21:30.172049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:45.618 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.618 [2024-12-06 19:21:30.174650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:45.618 NVMe io qpair process completion error
00:22:45.618 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.618 [2024-12-06 19:21:30.179282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e6d90 is same with the state(6) to be set [same message repeated at .179323, .179338, .179351, .179363, .179374]
00:22:45.619 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.619 [2024-12-06 19:21:30.179857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7280 is same with the state(6) to be set [same message repeated at .179901, .179918, .179932, .179953, .179968; raw output interleaved with concurrent "Write completed" lines]
00:22:45.619 [2024-12-06 19:21:30.180151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:45.619 [2024-12-06 19:21:30.180204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7750 is same with the state(6) to be set
00:22:45.619 [2024-12-06 19:21:30.180453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e68c0 is same with the state(6) to be set [same message repeated at .180494, .180510, .180523, .180537, .180549, .180562, .180576, .180588]
00:22:45.619 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.619 [2024-12-06 19:21:30.181577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:45.620 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.620 [2024-12-06 19:21:30.182929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:45.620 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.621 [2024-12-06 19:21:30.184946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:45.621 NVMe io qpair process completion error
00:22:45.621 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.621 [2024-12-06 19:21:30.186394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:45.621 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.622 [2024-12-06 19:21:30.187654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:45.622 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated]
00:22:45.622 [2024-12-06 19:21:30.188970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:45.623 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [pattern repeated] 00:22:45.623
00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, 
sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 [2024-12-06 19:21:30.191563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:45.623 NVMe io qpair process completion error 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with 
error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.623 starting I/O failed: -6 00:22:45.623 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 [2024-12-06 19:21:30.192850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:45.624 starting I/O failed: -6 00:22:45.624 starting 
I/O failed: -6 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write 
completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 [2024-12-06 19:21:30.194077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 
00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 
00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 Write completed with error (sct=0, sc=8) 00:22:45.624 starting I/O failed: -6 00:22:45.624 [2024-12-06 19:21:30.195410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: 
-6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O 
failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting 
I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 [2024-12-06 19:21:30.198715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:45.625 NVMe io qpair process completion error 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 starting I/O failed: -6 00:22:45.625 Write completed with error (sct=0, sc=8) 00:22:45.625 Write 
completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 [2024-12-06 19:21:30.200198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 
00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write 
completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 [2024-12-06 19:21:30.201349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6 00:22:45.626 Write completed with 
error (sct=0, sc=8) 00:22:45.626 Write completed with error (sct=0, sc=8) 00:22:45.626 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:22:45.627 [2024-12-06 19:21:30.202657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines elided]
00:22:45.627 [2024-12-06 19:21:30.206518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:45.627 NVMe io qpair process completion error
[repeated write-error lines elided]
00:22:45.628 [2024-12-06 19:21:30.207697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines elided]
00:22:45.628 [2024-12-06 19:21:30.208834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated write-error lines elided]
00:22:45.629 [2024-12-06 19:21:30.210197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines elided]
00:22:45.630 [2024-12-06 19:21:30.212556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:45.630 NVMe io qpair process completion error
[repeated write-error lines elided]
00:22:45.630 [2024-12-06 19:21:30.213961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated write-error lines elided]
00:22:45.631 [2024-12-06 19:21:30.215136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated write-error lines elided]
00:22:45.631 [2024-12-06 19:21:30.216465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated write-error lines elided]
00:22:45.632 [2024-12-06 19:21:30.218561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:45.632 NVMe io qpair process completion error
[repeated write-error lines elided] 00:22:45.632 Write completed with error
(sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 [2024-12-06 19:21:30.219950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write 
completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 starting I/O failed: -6 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.632 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O 
failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 [2024-12-06 19:21:30.221192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error 
(sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting 
I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 [2024-12-06 19:21:30.222439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 
00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.633 Write completed with error (sct=0, sc=8) 00:22:45.633 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, 
sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 [2024-12-06 19:21:30.226902] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:45.634 NVMe io qpair process completion error 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 
Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 
00:22:45.634 starting I/O failed: -6 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.634 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 
00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with 
error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 [2024-12-06 19:21:30.230254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 
Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.635 Write completed with error (sct=0, sc=8) 00:22:45.635 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 
00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: 
-6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 Write completed with error (sct=0, sc=8) 00:22:45.636 starting I/O failed: -6 00:22:45.636 [2024-12-06 19:21:30.233297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:45.636 NVMe io qpair process completion error 00:22:45.636 Initializing NVMe Controllers 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:45.636 Controller IO queue size 128, less than required. 00:22:45.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:45.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:45.637 Initialization complete. Launching workers. 
00:22:45.637 ======================================================== 00:22:45.637 Latency(us) 00:22:45.637 Device Information : IOPS MiB/s Average min max 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1593.38 68.47 80343.55 1208.55 134125.67 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1690.01 72.62 75773.01 897.71 152577.20 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1691.07 72.66 74734.55 1225.09 141977.15 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1685.13 72.41 75026.49 883.70 130993.92 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1672.17 71.85 76474.05 867.48 135065.18 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1629.27 70.01 77618.75 1177.90 139244.90 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1624.39 69.80 77873.78 1250.91 138970.88 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1691.07 72.66 74832.15 905.51 138348.78 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1692.56 72.73 74803.13 942.55 137734.57 00:22:45.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1674.93 71.97 75630.23 918.25 137733.71 00:22:45.637 ======================================================== 00:22:45.637 Total : 16643.99 715.17 76277.02 867.48 152577.20 00:22:45.637 00:22:45.637 [2024-12-06 19:21:30.237518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2d9e0 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.237643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2e920 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.237744] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2f720 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.237841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2d6b0 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.237920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2ec50 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.237999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2e5f0 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.238077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2fae0 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.238163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2e2c0 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.238244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2f900 is same with the state(6) to be set 00:22:45.637 [2024-12-06 19:21:30.238325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2dd10 is same with the state(6) to be set 00:22:45.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:45.637 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 264962 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 264962 00:22:47.017 19:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 264962 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:47.017 19:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.017 rmmod nvme_tcp 00:22:47.017 rmmod nvme_fabrics 00:22:47.017 rmmod nvme_keyring 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 264793 ']' 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 264793 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 264793 ']' 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 264793 00:22:47.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (264793) - No such process 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 264793 is not found' 
00:22:47.017 Process with pid 264793 is not found 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.017 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:48.923 00:22:48.923 real 0m9.687s 00:22:48.923 user 0m23.916s 00:22:48.923 sys 0m6.122s 00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:48.923 ************************************ 00:22:48.923 END TEST nvmf_shutdown_tc4 00:22:48.923 ************************************ 00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:48.923 00:22:48.923 real 0m37.517s 00:22:48.923 user 1m42.363s 00:22:48.923 sys 0m12.784s 00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.923 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:48.923 ************************************ 00:22:48.923 END TEST nvmf_shutdown 00:22:48.924 ************************************ 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:48.924 ************************************ 00:22:48.924 START TEST nvmf_nsid 00:22:48.924 ************************************ 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:48.924 * Looking for test storage... 
00:22:48.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.924 
19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.924 --rc genhtml_branch_coverage=1 00:22:48.924 --rc genhtml_function_coverage=1 00:22:48.924 --rc genhtml_legend=1 00:22:48.924 --rc geninfo_all_blocks=1 00:22:48.924 --rc 
geninfo_unexecuted_blocks=1 00:22:48.924 00:22:48.924 ' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.924 --rc genhtml_branch_coverage=1 00:22:48.924 --rc genhtml_function_coverage=1 00:22:48.924 --rc genhtml_legend=1 00:22:48.924 --rc geninfo_all_blocks=1 00:22:48.924 --rc geninfo_unexecuted_blocks=1 00:22:48.924 00:22:48.924 ' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.924 --rc genhtml_branch_coverage=1 00:22:48.924 --rc genhtml_function_coverage=1 00:22:48.924 --rc genhtml_legend=1 00:22:48.924 --rc geninfo_all_blocks=1 00:22:48.924 --rc geninfo_unexecuted_blocks=1 00:22:48.924 00:22:48.924 ' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.924 --rc genhtml_branch_coverage=1 00:22:48.924 --rc genhtml_function_coverage=1 00:22:48.924 --rc genhtml_legend=1 00:22:48.924 --rc geninfo_all_blocks=1 00:22:48.924 --rc geninfo_unexecuted_blocks=1 00:22:48.924 00:22:48.924 ' 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.924 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.183 19:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.183 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:51.713 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:51.714 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:51.714 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:51.714 Found net devices under 0000:84:00.0: cvl_0_0 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:51.714 Found net devices under 0000:84:00.1: cvl_0_1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.714 19:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.714 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:51.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:22:51.714 00:22:51.714 --- 10.0.0.2 ping statistics --- 00:22:51.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.714 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:22:51.714 00:22:51.714 --- 10.0.0.1 ping statistics --- 00:22:51.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.714 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.714 19:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=267712 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 267712 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 267712 ']' 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.714 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.715 [2024-12-06 19:21:36.396231] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:22:51.715 [2024-12-06 19:21:36.396330] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.715 [2024-12-06 19:21:36.470140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.715 [2024-12-06 19:21:36.528116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.715 [2024-12-06 19:21:36.528187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.715 [2024-12-06 19:21:36.528208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.715 [2024-12-06 19:21:36.528225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.715 [2024-12-06 19:21:36.528240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
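The `nvmf_tcp_init` steps traced above build a loopback test topology on a physical dual-port NIC: one port (`cvl_0_0`) is moved into a network namespace as the target side, while its peer (`cvl_0_1`) stays in the root namespace as the initiator. A dry-run sketch of that sequence is below; the commands are collected and printed rather than executed, since the live versions require root and the actual interfaces:

```shell
# Dry-run sketch of the netns topology from the log above.
# Interface/IP names are taken from the log; nothing here is executed live.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

plan=(
  "ip netns add $NS"                                          # target namespace
  "ip link set $TGT_IF netns $NS"                             # move target port in
  "ip addr add $INI_IP/24 dev $INI_IF"                        # initiator side addr
  "ip netns exec $NS ip addr add $TGT_IP/24 dev $TGT_IF"      # target side addr
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"  # open NVMe/TCP port
  "ping -c 1 $TGT_IP"                                         # connectivity check
)
printf '%s\n' "${plan[@]}"
```

On a live system each entry would be run under `sudo`, and the target app is then launched inside the namespace with `ip netns exec $NS`, exactly as the `NVMF_TARGET_NS_CMD` prefix in the log shows.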
00:22:51.715 [2024-12-06 19:21:36.529015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=267735 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.715 
19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=a674d86b-1eda-4d00-8067-d8e59c8a88c6 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=93be36f9-ddc5-4ae8-817c-3df7256a6493 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1b549296-f80e-48b2-baf9-06c311665f39 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.715 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.715 null0 00:22:51.715 null1 00:22:51.715 null2 00:22:51.715 [2024-12-06 19:21:36.710299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.715 [2024-12-06 19:21:36.728358] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:22:51.715 [2024-12-06 19:21:36.728444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267735 ] 00:22:51.715 [2024-12-06 19:21:36.734537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 267735 /var/tmp/tgt2.sock 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 267735 ']' 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:51.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
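`waitforlisten` above (and `waitforblk` later in the run) both follow the same poll-until-ready pattern: retry a cheap probe with a bounded attempt count and a sleep between tries. A generic sketch of that pattern, with a hypothetical `retry_until` helper name not taken from the source:

```shell
# Generic bounded-retry helper in the spirit of waitforlisten/waitforblk.
# retry_until <max_tries> <probe-command...>
retry_until() {
  local max=$1 i=0
  shift
  until "$@"; do                 # rerun the probe until it succeeds
    (( ++i < max )) || return 1  # give up after max attempts
    sleep 0.2
  done
}

# A live probe might test for the RPC socket, e.g.:
#   retry_until 100 test -S /var/tmp/tgt2.sock
retry_until 3 true && echo "listening"
```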
00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.973 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.973 [2024-12-06 19:21:36.803569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.973 [2024-12-06 19:21:36.864225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.231 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.231 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:52.231 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:52.801 [2024-12-06 19:21:37.579135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.801 [2024-12-06 19:21:37.595321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:52.801 nvme0n1 nvme0n2 00:22:52.801 nvme1n1 00:22:52.801 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:52.801 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:52.802 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:53.372 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid a674d86b-1eda-4d00-8067-d8e59c8a88c6 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:54.308 19:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a674d86b1eda4d008067d8e59c8a88c6 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A674D86B1EDA4D008067D8E59C8A88C6 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ A674D86B1EDA4D008067D8E59C8A88C6 == \A\6\7\4\D\8\6\B\1\E\D\A\4\D\0\0\8\0\6\7\D\8\E\5\9\C\8\A\8\8\C\6 ]] 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 93be36f9-ddc5-4ae8-817c-3df7256a6493 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:54.308 
19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=93be36f9ddc54ae8817c3df7256a6493 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 93BE36F9DDC54AE8817C3DF7256A6493 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 93BE36F9DDC54AE8817C3DF7256A6493 == \9\3\B\E\3\6\F\9\D\D\C\5\4\A\E\8\8\1\7\C\3\D\F\7\2\5\6\A\6\4\9\3 ]] 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1b549296-f80e-48b2-baf9-06c311665f39 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
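The NGUID checks traced above compare each namespace UUID against the NGUID the kernel reports: `uuid2nguid` strips the dashes (`tr -d -`) and the comparison is done in uppercase on both sides. A minimal sketch of that check, using the `ns1uuid` value from this run (the helper here is a local reimplementation, not the one in `nvmf/common.sh`):

```shell
# Reimplemented helper mirroring the log's uuid2nguid: an NGUID is the
# namespace UUID with dashes stripped, compared case-insensitively.
uuid2nguid() {
  local n
  n=$(tr -d - <<< "$1")   # drop the dashes
  echo "${n^^}"           # uppercase to match the log's comparison format
}

uuid=a674d86b-1eda-4d00-8067-d8e59c8a88c6   # ns1uuid from the log
expected=$(uuid2nguid "$uuid")

# On a live system the observed value comes from the block device itself:
#   nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
nguid=a674d86b1eda4d008067d8e59c8a88c6      # value nvme reported in the log
[[ "${nguid^^}" == "$expected" ]] && echo "NGUID matches UUID"
```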
00:22:54.308 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1b549296f80e48b2baf906c311665f39 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1B549296F80E48B2BAF906C311665F39 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1B549296F80E48B2BAF906C311665F39 == \1\B\5\4\9\2\9\6\F\8\0\E\4\8\B\2\B\A\F\9\0\6\C\3\1\1\6\6\5\F\3\9 ]] 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 267735 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 267735 ']' 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 267735 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.568 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267735 00:22:54.827 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.827 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.827 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267735' 00:22:54.827 killing process with pid 267735 00:22:54.827 19:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 267735 00:22:54.827 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 267735 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.086 rmmod nvme_tcp 00:22:55.086 rmmod nvme_fabrics 00:22:55.086 rmmod nvme_keyring 00:22:55.086 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 267712 ']' 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 267712 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 267712 ']' 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 267712 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:55.087 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.087 19:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267712 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267712' 00:22:55.345 killing process with pid 267712 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 267712 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 267712 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.345 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.345 19:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.878 00:22:57.878 real 0m8.573s 00:22:57.878 user 0m8.546s 00:22:57.878 sys 0m2.706s 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:57.878 ************************************ 00:22:57.878 END TEST nvmf_nsid 00:22:57.878 ************************************ 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:57.878 00:22:57.878 real 11m47.668s 00:22:57.878 user 27m53.135s 00:22:57.878 sys 2m55.347s 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.878 19:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:57.878 ************************************ 00:22:57.878 END TEST nvmf_target_extra 00:22:57.878 ************************************ 00:22:57.878 19:21:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:57.878 19:21:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.878 19:21:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.878 19:21:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:57.878 ************************************ 00:22:57.878 START TEST nvmf_host 00:22:57.878 ************************************ 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:57.878 * Looking for test storage... 
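The teardown above relies on the `ipts`/`iptr` pair: every firewall rule the test inserts carries an `SPDK_NVMF` comment tag, so cleanup can drop them all in one pass by filtering a ruleset dump. A sketch of that filtering step, simulated on a saved-ruleset string rather than live iptables (the live form visible in the log is `iptables-save | grep -v SPDK_NVMF | iptables-restore`):

```shell
# Simulated tagged-rule cleanup: filter out every rule carrying the
# SPDK_NVMF comment tag, leaving unrelated rules untouched.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

cleaned=$(grep -v SPDK_NVMF <<< "$ruleset")
echo "$cleaned"
```

Tagging rules at insert time and filtering by tag at teardown keeps the cleanup idempotent: it removes exactly what the test added, however many rules that turned out to be.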
00:22:57.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:57.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.878 --rc genhtml_branch_coverage=1 00:22:57.878 --rc genhtml_function_coverage=1 00:22:57.878 --rc genhtml_legend=1 00:22:57.878 --rc geninfo_all_blocks=1 00:22:57.878 --rc geninfo_unexecuted_blocks=1 00:22:57.878 00:22:57.878 ' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:57.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.878 --rc genhtml_branch_coverage=1 00:22:57.878 --rc genhtml_function_coverage=1 00:22:57.878 --rc genhtml_legend=1 00:22:57.878 --rc 
geninfo_all_blocks=1 00:22:57.878 --rc geninfo_unexecuted_blocks=1 00:22:57.878 00:22:57.878 ' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:57.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.878 --rc genhtml_branch_coverage=1 00:22:57.878 --rc genhtml_function_coverage=1 00:22:57.878 --rc genhtml_legend=1 00:22:57.878 --rc geninfo_all_blocks=1 00:22:57.878 --rc geninfo_unexecuted_blocks=1 00:22:57.878 00:22:57.878 ' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:57.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.878 --rc genhtml_branch_coverage=1 00:22:57.878 --rc genhtml_function_coverage=1 00:22:57.878 --rc genhtml_legend=1 00:22:57.878 --rc geninfo_all_blocks=1 00:22:57.878 --rc geninfo_unexecuted_blocks=1 00:22:57.878 00:22:57.878 ' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.878 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.879 ************************************ 00:22:57.879 START TEST nvmf_multicontroller 00:22:57.879 ************************************ 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:57.879 * Looking for test storage... 
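The `[: : integer expression expected` line above is real noise from nvmf/common.sh line 33: an unset variable reaches `'[' '' -eq 1 ']'`, and `-eq` requires integer operands. Defaulting the expansion keeps the test numeric; `flag` below is a stand-in name for the unset variable:

```shell
# Reproduce and fix the "[: : integer expression expected" noise seen above.
flag=""                          # stand-in for the unset flag in common.sh
# Broken form: '[' '' -eq 1 ']' emits the integer-expression error (exit 2).
[ "$flag" -eq 1 ] 2>/dev/null && echo "flag set (broken test)"
# Defensive form: default the expansion so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```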
00:22:57.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:57.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.879 --rc genhtml_branch_coverage=1 00:22:57.879 --rc genhtml_function_coverage=1 
00:22:57.879 --rc genhtml_legend=1 00:22:57.879 --rc geninfo_all_blocks=1 00:22:57.879 --rc geninfo_unexecuted_blocks=1 00:22:57.879 00:22:57.879 ' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:57.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.879 --rc genhtml_branch_coverage=1 00:22:57.879 --rc genhtml_function_coverage=1 00:22:57.879 --rc genhtml_legend=1 00:22:57.879 --rc geninfo_all_blocks=1 00:22:57.879 --rc geninfo_unexecuted_blocks=1 00:22:57.879 00:22:57.879 ' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:57.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.879 --rc genhtml_branch_coverage=1 00:22:57.879 --rc genhtml_function_coverage=1 00:22:57.879 --rc genhtml_legend=1 00:22:57.879 --rc geninfo_all_blocks=1 00:22:57.879 --rc geninfo_unexecuted_blocks=1 00:22:57.879 00:22:57.879 ' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:57.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.879 --rc genhtml_branch_coverage=1 00:22:57.879 --rc genhtml_function_coverage=1 00:22:57.879 --rc genhtml_legend=1 00:22:57.879 --rc geninfo_all_blocks=1 00:22:57.879 --rc geninfo_unexecuted_blocks=1 00:22:57.879 00:22:57.879 ' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.879 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.880 19:21:42 
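The `lt 1.15 2` walk-through traced earlier (the lcov version check) splits each version string on `.`, `-` and `:` and compares the fields numerically, left to right. A condensed standalone sketch of that logic; `ver_lt` is a hypothetical name, not the exact scripts/common.sh implementation:

```shell
# Condensed sketch of the cmp_versions logic traced above: split on ".-:"
# and compare field by field; a missing field counts as 0.
ver_lt() {
  local IFS='.-:' i
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less in this field
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                                      # equal is not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

This is why the trace sets `IFS=.-:` before each `read -ra ver1` / `read -ra ver2`: the field split drives the whole comparison.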
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.880 19:21:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:00.443 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:00.443 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.443 19:21:44 
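The discovery loop above matches each NIC's PCI vendor/device pair against hard-coded ID lists (e810, x722, mlx) before looking up its net devices. A simplified standalone sketch of that classification; `classify_nic` is a hypothetical name, and the ID lists below are only the subset visible in this trace (the mlx entry is collapsed to a vendor wildcard for brevity):

```shell
# Classify a PCI vendor/device pair the way the trace above does, against a
# simplified subset of the e810 / x722 / mellanox allow-lists.
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox (simplified)
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # the two ports found in this run are 0x159b
```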
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:00.443 Found net devices under 0000:84:00.0: cvl_0_0 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:00.443 Found net devices under 0000:84:00.1: cvl_0_1 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.443 19:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.443 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.443 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.443 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.443 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:23:00.444 00:23:00.444 --- 10.0.0.2 ping statistics --- 00:23:00.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.444 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:23:00.444 00:23:00.444 --- 10.0.0.1 ping statistics --- 00:23:00.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.444 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=270308 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 270308 00:23:00.444 19:21:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 270308 ']' 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 [2024-12-06 19:21:45.158442] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:00.444 [2024-12-06 19:21:45.158551] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.444 [2024-12-06 19:21:45.232298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.444 [2024-12-06 19:21:45.291751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.444 [2024-12-06 19:21:45.291818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.444 [2024-12-06 19:21:45.291845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.444 [2024-12-06 19:21:45.291856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.444 [2024-12-06 19:21:45.291866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.444 [2024-12-06 19:21:45.293575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.444 [2024-12-06 19:21:45.293641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.444 [2024-12-06 19:21:45.293637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 [2024-12-06 19:21:45.434864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 Malloc0 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.444 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 [2024-12-06 
19:21:45.499113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 [2024-12-06 19:21:45.506929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 Malloc1 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=270340 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 270340 /var/tmp/bdevperf.sock 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 270340 ']' 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.703 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.961 NVMe0n1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.961 1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.961 19:21:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.961 request: 00:23:00.961 { 00:23:00.961 "name": "NVMe0", 00:23:00.961 "trtype": "tcp", 00:23:00.961 "traddr": "10.0.0.2", 00:23:00.961 "adrfam": "ipv4", 00:23:00.961 "trsvcid": "4420", 00:23:00.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.961 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:00.961 "hostaddr": "10.0.0.1", 00:23:00.961 "prchk_reftag": false, 00:23:00.961 "prchk_guard": false, 00:23:00.961 "hdgst": false, 00:23:00.961 "ddgst": false, 00:23:00.961 "allow_unrecognized_csi": false, 00:23:00.961 "method": "bdev_nvme_attach_controller", 00:23:00.961 "req_id": 1 00:23:00.961 } 00:23:00.961 Got JSON-RPC error response 00:23:00.961 response: 00:23:00.961 { 00:23:00.961 "code": -114, 00:23:00.961 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.961 } 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.961 19:21:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.961 request: 00:23:00.961 { 00:23:00.961 "name": "NVMe0", 00:23:00.961 "trtype": "tcp", 00:23:00.961 "traddr": "10.0.0.2", 00:23:00.961 "adrfam": "ipv4", 00:23:00.961 "trsvcid": "4420", 00:23:00.961 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.961 "hostaddr": "10.0.0.1", 00:23:00.961 "prchk_reftag": false, 00:23:00.961 "prchk_guard": false, 00:23:00.961 "hdgst": false, 00:23:00.961 "ddgst": false, 00:23:00.961 "allow_unrecognized_csi": false, 00:23:00.961 "method": "bdev_nvme_attach_controller", 00:23:00.961 "req_id": 1 00:23:00.961 } 00:23:00.961 Got JSON-RPC error response 00:23:00.961 response: 00:23:00.961 { 00:23:00.961 "code": -114, 00:23:00.961 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.961 } 00:23:00.961 19:21:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.961 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.962 19:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.962 request: 00:23:00.962 { 00:23:00.962 "name": "NVMe0", 00:23:00.962 "trtype": "tcp", 00:23:00.962 "traddr": "10.0.0.2", 00:23:00.962 "adrfam": "ipv4", 00:23:00.962 "trsvcid": "4420", 00:23:00.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.962 "hostaddr": "10.0.0.1", 00:23:00.962 "prchk_reftag": false, 00:23:00.962 "prchk_guard": false, 00:23:00.962 "hdgst": false, 00:23:00.962 "ddgst": false, 00:23:00.962 "multipath": "disable", 00:23:00.962 "allow_unrecognized_csi": false, 00:23:00.962 "method": "bdev_nvme_attach_controller", 00:23:00.962 "req_id": 1 00:23:00.962 } 00:23:00.962 Got JSON-RPC error response 00:23:00.962 response: 00:23:00.962 { 00:23:00.962 "code": -114, 00:23:00.962 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:00.962 } 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.962 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 request: 00:23:01.220 { 00:23:01.220 "name": "NVMe0", 00:23:01.220 "trtype": "tcp", 00:23:01.220 "traddr": "10.0.0.2", 00:23:01.220 "adrfam": "ipv4", 00:23:01.220 "trsvcid": "4420", 00:23:01.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.220 "hostaddr": "10.0.0.1", 00:23:01.220 "prchk_reftag": false, 00:23:01.220 "prchk_guard": false, 00:23:01.220 "hdgst": false, 00:23:01.220 "ddgst": false, 00:23:01.220 "multipath": "failover", 00:23:01.220 "allow_unrecognized_csi": false, 00:23:01.220 "method": "bdev_nvme_attach_controller", 00:23:01.220 "req_id": 1 00:23:01.220 } 00:23:01.220 Got JSON-RPC error response 00:23:01.220 response: 00:23:01.220 { 00:23:01.220 "code": -114, 00:23:01.220 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:01.220 } 00:23:01.220 19:21:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 NVMe0n1 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:01.220 19:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.596 { 00:23:02.596 "results": [ 00:23:02.596 { 00:23:02.596 "job": "NVMe0n1", 00:23:02.596 "core_mask": "0x1", 00:23:02.596 "workload": "write", 00:23:02.596 "status": "finished", 00:23:02.596 "queue_depth": 128, 00:23:02.596 "io_size": 4096, 00:23:02.596 "runtime": 1.006236, 00:23:02.596 "iops": 18871.318458095317, 00:23:02.596 "mibps": 73.71608772693483, 00:23:02.596 "io_failed": 0, 00:23:02.596 "io_timeout": 0, 00:23:02.596 "avg_latency_us": 6766.4658052712775, 00:23:02.596 "min_latency_us": 2026.7614814814815, 00:23:02.596 "max_latency_us": 12330.477037037037 00:23:02.596 } 00:23:02.596 ], 00:23:02.596 "core_count": 1 00:23:02.596 } 00:23:02.596 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:02.596 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.596 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.596 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.596 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 270340 ']' 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270340' 00:23:02.597 killing process with pid 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 270340 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:02.597 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.597 [2024-12-06 19:21:45.611109] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:23:02.597 [2024-12-06 19:21:45.611195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid270340 ] 00:23:02.597 [2024-12-06 19:21:45.680361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.597 [2024-12-06 19:21:45.739428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.597 [2024-12-06 19:21:46.197321] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 1a03bee9-b44b-4e6f-92ab-7926c774935a already exists 00:23:02.597 [2024-12-06 19:21:46.197359] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:1a03bee9-b44b-4e6f-92ab-7926c774935a alias for bdev NVMe1n1 00:23:02.597 [2024-12-06 19:21:46.197382] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:02.597 Running I/O for 1 seconds... 00:23:02.597 18797.00 IOPS, 73.43 MiB/s 00:23:02.597 Latency(us) 00:23:02.597 [2024-12-06T18:21:47.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.597 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:02.597 NVMe0n1 : 1.01 18871.32 73.72 0.00 0.00 6766.47 2026.76 12330.48 00:23:02.597 [2024-12-06T18:21:47.646Z] =================================================================================================================== 00:23:02.597 [2024-12-06T18:21:47.646Z] Total : 18871.32 73.72 0.00 0.00 6766.47 2026.76 12330.48 00:23:02.597 Received shutdown signal, test time was about 1.000000 seconds 00:23:02.597 00:23:02.597 Latency(us) 00:23:02.597 [2024-12-06T18:21:47.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.597 [2024-12-06T18:21:47.646Z] =================================================================================================================== 00:23:02.597 [2024-12-06T18:21:47.646Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:02.597 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.597 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.597 rmmod nvme_tcp 00:23:02.597 rmmod nvme_fabrics 00:23:02.855 rmmod nvme_keyring 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 270308 ']' 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 270308 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 270308 ']' 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 270308 
00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270308 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270308' 00:23:02.855 killing process with pid 270308 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 270308 00:23:02.855 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 270308 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.114 19:21:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.024 19:21:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.024 00:23:05.024 real 0m7.353s 00:23:05.024 user 0m10.892s 00:23:05.024 sys 0m2.392s 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.024 ************************************ 00:23:05.024 END TEST nvmf_multicontroller 00:23:05.024 ************************************ 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.024 ************************************ 00:23:05.024 START TEST nvmf_aer 00:23:05.024 ************************************ 00:23:05.024 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:05.283 * Looking for test storage... 
00:23:05.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.283 --rc genhtml_branch_coverage=1 00:23:05.283 --rc genhtml_function_coverage=1 00:23:05.283 --rc genhtml_legend=1 00:23:05.283 --rc geninfo_all_blocks=1 00:23:05.283 --rc geninfo_unexecuted_blocks=1 00:23:05.283 00:23:05.283 ' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.283 --rc 
genhtml_branch_coverage=1 00:23:05.283 --rc genhtml_function_coverage=1 00:23:05.283 --rc genhtml_legend=1 00:23:05.283 --rc geninfo_all_blocks=1 00:23:05.283 --rc geninfo_unexecuted_blocks=1 00:23:05.283 00:23:05.283 ' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.283 --rc genhtml_branch_coverage=1 00:23:05.283 --rc genhtml_function_coverage=1 00:23:05.283 --rc genhtml_legend=1 00:23:05.283 --rc geninfo_all_blocks=1 00:23:05.283 --rc geninfo_unexecuted_blocks=1 00:23:05.283 00:23:05.283 ' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.283 --rc genhtml_branch_coverage=1 00:23:05.283 --rc genhtml_function_coverage=1 00:23:05.283 --rc genhtml_legend=1 00:23:05.283 --rc geninfo_all_blocks=1 00:23:05.283 --rc geninfo_unexecuted_blocks=1 00:23:05.283 00:23:05.283 ' 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.283 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.283 19:21:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.284 19:21:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:07.824 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:07.824 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.824 19:21:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:07.824 Found net devices under 0000:84:00.0: cvl_0_0 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:07.824 Found net devices under 0000:84:00.1: cvl_0_1 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.824 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:07.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:23:07.825 00:23:07.825 --- 10.0.0.2 ping statistics --- 00:23:07.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.825 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:23:07.825 00:23:07.825 --- 10.0.0.1 ping statistics --- 00:23:07.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.825 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=272574 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 272574 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 272574 ']' 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.825 [2024-12-06 19:21:52.587821] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:07.825 [2024-12-06 19:21:52.587894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.825 [2024-12-06 19:21:52.660034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.825 [2024-12-06 19:21:52.721591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:07.825 [2024-12-06 19:21:52.721645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.825 [2024-12-06 19:21:52.721669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.825 [2024-12-06 19:21:52.721687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.825 [2024-12-06 19:21:52.721716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.825 [2024-12-06 19:21:52.723556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.825 [2024-12-06 19:21:52.723647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.825 [2024-12-06 19:21:52.723807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.825 [2024-12-06 19:21:52.723817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.825 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.825 [2024-12-06 19:21:52.865555] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 Malloc0 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 [2024-12-06 19:21:52.933312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.085 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.085 [ 00:23:08.085 { 00:23:08.085 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.085 "subtype": "Discovery", 00:23:08.085 "listen_addresses": [], 00:23:08.085 "allow_any_host": true, 00:23:08.085 "hosts": [] 00:23:08.085 }, 00:23:08.085 { 00:23:08.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.085 "subtype": "NVMe", 00:23:08.085 "listen_addresses": [ 00:23:08.085 { 00:23:08.085 "trtype": "TCP", 00:23:08.085 "adrfam": "IPv4", 00:23:08.085 "traddr": "10.0.0.2", 00:23:08.085 "trsvcid": "4420" 00:23:08.086 } 00:23:08.086 ], 00:23:08.086 "allow_any_host": true, 00:23:08.086 "hosts": [], 00:23:08.086 "serial_number": "SPDK00000000000001", 00:23:08.086 "model_number": "SPDK bdev Controller", 00:23:08.086 "max_namespaces": 2, 00:23:08.086 "min_cntlid": 1, 00:23:08.086 "max_cntlid": 65519, 00:23:08.086 "namespaces": [ 00:23:08.086 { 00:23:08.086 "nsid": 1, 00:23:08.086 "bdev_name": "Malloc0", 00:23:08.086 "name": "Malloc0", 00:23:08.086 "nguid": "12FF87DB0FE043B5B7029CD62DB60F72", 00:23:08.086 "uuid": "12ff87db-0fe0-43b5-b702-9cd62db60f72" 00:23:08.086 } 00:23:08.086 ] 00:23:08.086 } 00:23:08.086 ] 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=272718 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:08.086 19:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.086 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.086 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:08.086 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:08.086 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 Malloc1 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 [ 00:23:08.345 { 00:23:08.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.345 "subtype": "Discovery", 00:23:08.345 "listen_addresses": [], 00:23:08.345 "allow_any_host": true, 00:23:08.345 "hosts": [] 00:23:08.345 }, 00:23:08.345 { 00:23:08.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.345 "subtype": "NVMe", 00:23:08.345 "listen_addresses": [ 00:23:08.345 { 00:23:08.345 "trtype": "TCP", 00:23:08.345 "adrfam": "IPv4", 00:23:08.345 "traddr": "10.0.0.2", 00:23:08.345 "trsvcid": "4420" 00:23:08.345 } 00:23:08.345 ], 00:23:08.345 "allow_any_host": true, 00:23:08.345 "hosts": [], 00:23:08.345 "serial_number": "SPDK00000000000001", 00:23:08.345 "model_number": 
"SPDK bdev Controller", 00:23:08.345 "max_namespaces": 2, 00:23:08.345 "min_cntlid": 1, 00:23:08.345 "max_cntlid": 65519, 00:23:08.345 "namespaces": [ 00:23:08.345 { 00:23:08.345 "nsid": 1, 00:23:08.345 "bdev_name": "Malloc0", 00:23:08.345 "name": "Malloc0", 00:23:08.345 "nguid": "12FF87DB0FE043B5B7029CD62DB60F72", 00:23:08.345 "uuid": "12ff87db-0fe0-43b5-b702-9cd62db60f72" 00:23:08.345 }, 00:23:08.345 { 00:23:08.345 "nsid": 2, 00:23:08.345 "bdev_name": "Malloc1", 00:23:08.345 "name": "Malloc1", 00:23:08.345 "nguid": "38AF2673EDC348759540F0C96181607D", 00:23:08.345 "uuid": "38af2673-edc3-4875-9540-f0c96181607d" 00:23:08.345 } 00:23:08.345 ] 00:23:08.345 } 00:23:08.345 ] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 272718 00:23:08.345 Asynchronous Event Request test 00:23:08.345 Attaching to 10.0.0.2 00:23:08.345 Attached to 10.0.0.2 00:23:08.345 Registering asynchronous event callbacks... 00:23:08.345 Starting namespace attribute notice tests for all controllers... 00:23:08.345 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:08.345 aer_cb - Changed Namespace 00:23:08.345 Cleaning up... 
00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.345 rmmod nvme_tcp 
00:23:08.345 rmmod nvme_fabrics 00:23:08.345 rmmod nvme_keyring 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 272574 ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 272574 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 272574 ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 272574 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.345 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 272574 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 272574' 00:23:08.604 killing process with pid 272574 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 272574 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 272574 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.604 19:21:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.141 00:23:11.141 real 0m5.637s 00:23:11.141 user 0m4.467s 00:23:11.141 sys 0m2.059s 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.141 ************************************ 00:23:11.141 END TEST nvmf_aer 00:23:11.141 ************************************ 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.141 ************************************ 00:23:11.141 START TEST nvmf_async_init 00:23:11.141 
************************************ 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.141 * Looking for test storage... 00:23:11.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.141 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --rc geninfo_unexecuted_blocks=1 00:23:11.141 00:23:11.141 ' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --rc geninfo_unexecuted_blocks=1 00:23:11.141 00:23:11.141 ' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --rc geninfo_unexecuted_blocks=1 00:23:11.141 00:23:11.141 ' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --rc geninfo_unexecuted_blocks=1 00:23:11.141 00:23:11.141 ' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.141 19:21:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.141 
19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.141 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=26ec2661c7df4a0e83d49b38177890c7 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.142 19:21:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.676 19:21:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:13.676 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:13.676 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:13.676 Found net devices under 0000:84:00.0: cvl_0_0 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.676 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:13.677 Found net devices under 0000:84:00.1: cvl_0_1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:13.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:23:13.677 00:23:13.677 --- 10.0.0.2 ping statistics --- 00:23:13.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.677 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:23:13.677 00:23:13.677 --- 10.0.0.1 ping statistics --- 00:23:13.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.677 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=274679 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 274679 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 274679 ']' 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 [2024-12-06 19:21:58.327838] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:23:13.677 [2024-12-06 19:21:58.327937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.677 [2024-12-06 19:21:58.402355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.677 [2024-12-06 19:21:58.459737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.677 [2024-12-06 19:21:58.459812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.677 [2024-12-06 19:21:58.459834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.677 [2024-12-06 19:21:58.459851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.677 [2024-12-06 19:21:58.459865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:13.677 [2024-12-06 19:21:58.460561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 [2024-12-06 19:21:58.608262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 null0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 26ec2661c7df4a0e83d49b38177890c7 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.677 [2024-12-06 19:21:58.648569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.677 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.938 nvme0n1 00:23:13.938 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.938 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.938 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.938 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.938 [ 00:23:13.938 { 00:23:13.938 "name": "nvme0n1", 00:23:13.938 "aliases": [ 00:23:13.938 "26ec2661-c7df-4a0e-83d4-9b38177890c7" 00:23:13.938 ], 00:23:13.938 "product_name": "NVMe disk", 00:23:13.938 "block_size": 512, 00:23:13.938 "num_blocks": 2097152, 00:23:13.938 "uuid": "26ec2661-c7df-4a0e-83d4-9b38177890c7", 00:23:13.938 "numa_id": 1, 00:23:13.939 "assigned_rate_limits": { 00:23:13.939 "rw_ios_per_sec": 0, 00:23:13.939 "rw_mbytes_per_sec": 0, 00:23:13.939 "r_mbytes_per_sec": 0, 00:23:13.939 "w_mbytes_per_sec": 0 00:23:13.939 }, 00:23:13.939 "claimed": false, 00:23:13.939 "zoned": false, 00:23:13.939 "supported_io_types": { 00:23:13.939 "read": true, 00:23:13.939 "write": true, 00:23:13.939 "unmap": false, 00:23:13.939 "flush": true, 00:23:13.939 "reset": true, 00:23:13.939 "nvme_admin": true, 00:23:13.939 "nvme_io": true, 00:23:13.939 "nvme_io_md": false, 00:23:13.939 "write_zeroes": true, 00:23:13.939 "zcopy": false, 00:23:13.939 "get_zone_info": false, 00:23:13.939 "zone_management": false, 00:23:13.939 "zone_append": false, 00:23:13.939 "compare": true, 00:23:13.939 "compare_and_write": true, 00:23:13.939 "abort": true, 00:23:13.939 "seek_hole": false, 00:23:13.939 "seek_data": false, 00:23:13.939 "copy": true, 00:23:13.939 
"nvme_iov_md": false 00:23:13.939 }, 00:23:13.939 "memory_domains": [ 00:23:13.939 { 00:23:13.939 "dma_device_id": "system", 00:23:13.939 "dma_device_type": 1 00:23:13.939 } 00:23:13.939 ], 00:23:13.939 "driver_specific": { 00:23:13.939 "nvme": [ 00:23:13.939 { 00:23:13.939 "trid": { 00:23:13.939 "trtype": "TCP", 00:23:13.939 "adrfam": "IPv4", 00:23:13.939 "traddr": "10.0.0.2", 00:23:13.939 "trsvcid": "4420", 00:23:13.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.939 }, 00:23:13.939 "ctrlr_data": { 00:23:13.939 "cntlid": 1, 00:23:13.939 "vendor_id": "0x8086", 00:23:13.939 "model_number": "SPDK bdev Controller", 00:23:13.939 "serial_number": "00000000000000000000", 00:23:13.939 "firmware_revision": "25.01", 00:23:13.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.939 "oacs": { 00:23:13.939 "security": 0, 00:23:13.939 "format": 0, 00:23:13.939 "firmware": 0, 00:23:13.939 "ns_manage": 0 00:23:13.939 }, 00:23:13.939 "multi_ctrlr": true, 00:23:13.939 "ana_reporting": false 00:23:13.939 }, 00:23:13.939 "vs": { 00:23:13.939 "nvme_version": "1.3" 00:23:13.939 }, 00:23:13.939 "ns_data": { 00:23:13.939 "id": 1, 00:23:13.939 "can_share": true 00:23:13.939 } 00:23:13.939 } 00:23:13.939 ], 00:23:13.939 "mp_policy": "active_passive" 00:23:13.939 } 00:23:13.939 } 00:23:13.939 ] 00:23:13.939 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.939 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:13.939 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.939 19:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.939 [2024-12-06 19:21:58.897049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:13.939 [2024-12-06 19:21:58.897149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x205be40 (9): Bad file descriptor 00:23:14.199 [2024-12-06 19:21:59.028845] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.199 [ 00:23:14.199 { 00:23:14.199 "name": "nvme0n1", 00:23:14.199 "aliases": [ 00:23:14.199 "26ec2661-c7df-4a0e-83d4-9b38177890c7" 00:23:14.199 ], 00:23:14.199 "product_name": "NVMe disk", 00:23:14.199 "block_size": 512, 00:23:14.199 "num_blocks": 2097152, 00:23:14.199 "uuid": "26ec2661-c7df-4a0e-83d4-9b38177890c7", 00:23:14.199 "numa_id": 1, 00:23:14.199 "assigned_rate_limits": { 00:23:14.199 "rw_ios_per_sec": 0, 00:23:14.199 "rw_mbytes_per_sec": 0, 00:23:14.199 "r_mbytes_per_sec": 0, 00:23:14.199 "w_mbytes_per_sec": 0 00:23:14.199 }, 00:23:14.199 "claimed": false, 00:23:14.199 "zoned": false, 00:23:14.199 "supported_io_types": { 00:23:14.199 "read": true, 00:23:14.199 "write": true, 00:23:14.199 "unmap": false, 00:23:14.199 "flush": true, 00:23:14.199 "reset": true, 00:23:14.199 "nvme_admin": true, 00:23:14.199 "nvme_io": true, 00:23:14.199 "nvme_io_md": false, 00:23:14.199 "write_zeroes": true, 00:23:14.199 "zcopy": false, 00:23:14.199 "get_zone_info": false, 00:23:14.199 "zone_management": false, 00:23:14.199 "zone_append": false, 00:23:14.199 "compare": true, 00:23:14.199 "compare_and_write": true, 00:23:14.199 "abort": true, 00:23:14.199 "seek_hole": false, 00:23:14.199 "seek_data": false, 00:23:14.199 "copy": true, 00:23:14.199 "nvme_iov_md": false 00:23:14.199 }, 00:23:14.199 "memory_domains": [ 
00:23:14.199 { 00:23:14.199 "dma_device_id": "system", 00:23:14.199 "dma_device_type": 1 00:23:14.199 } 00:23:14.199 ], 00:23:14.199 "driver_specific": { 00:23:14.199 "nvme": [ 00:23:14.199 { 00:23:14.199 "trid": { 00:23:14.199 "trtype": "TCP", 00:23:14.199 "adrfam": "IPv4", 00:23:14.199 "traddr": "10.0.0.2", 00:23:14.199 "trsvcid": "4420", 00:23:14.199 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:14.199 }, 00:23:14.199 "ctrlr_data": { 00:23:14.199 "cntlid": 2, 00:23:14.199 "vendor_id": "0x8086", 00:23:14.199 "model_number": "SPDK bdev Controller", 00:23:14.199 "serial_number": "00000000000000000000", 00:23:14.199 "firmware_revision": "25.01", 00:23:14.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.199 "oacs": { 00:23:14.199 "security": 0, 00:23:14.199 "format": 0, 00:23:14.199 "firmware": 0, 00:23:14.199 "ns_manage": 0 00:23:14.199 }, 00:23:14.199 "multi_ctrlr": true, 00:23:14.199 "ana_reporting": false 00:23:14.199 }, 00:23:14.199 "vs": { 00:23:14.199 "nvme_version": "1.3" 00:23:14.199 }, 00:23:14.199 "ns_data": { 00:23:14.199 "id": 1, 00:23:14.199 "can_share": true 00:23:14.199 } 00:23:14.199 } 00:23:14.199 ], 00:23:14.199 "mp_policy": "active_passive" 00:23:14.199 } 00:23:14.199 } 00:23:14.199 ] 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.199 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BPVmxt3SHH 
00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BPVmxt3SHH 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.BPVmxt3SHH 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-12-06 19:21:59.081604] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.200 [2024-12-06 19:21:59.081764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [2024-12-06 19:21:59.097653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.200 nvme0n1 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 [ 00:23:14.200 { 00:23:14.200 "name": "nvme0n1", 00:23:14.200 "aliases": [ 00:23:14.200 "26ec2661-c7df-4a0e-83d4-9b38177890c7" 00:23:14.200 ], 00:23:14.200 "product_name": "NVMe disk", 00:23:14.200 "block_size": 512, 00:23:14.200 "num_blocks": 2097152, 00:23:14.200 "uuid": "26ec2661-c7df-4a0e-83d4-9b38177890c7", 00:23:14.200 "numa_id": 1, 00:23:14.200 "assigned_rate_limits": { 00:23:14.200 "rw_ios_per_sec": 0, 00:23:14.200 
"rw_mbytes_per_sec": 0, 00:23:14.200 "r_mbytes_per_sec": 0, 00:23:14.200 "w_mbytes_per_sec": 0 00:23:14.200 }, 00:23:14.200 "claimed": false, 00:23:14.200 "zoned": false, 00:23:14.200 "supported_io_types": { 00:23:14.200 "read": true, 00:23:14.200 "write": true, 00:23:14.200 "unmap": false, 00:23:14.200 "flush": true, 00:23:14.200 "reset": true, 00:23:14.200 "nvme_admin": true, 00:23:14.200 "nvme_io": true, 00:23:14.200 "nvme_io_md": false, 00:23:14.200 "write_zeroes": true, 00:23:14.200 "zcopy": false, 00:23:14.200 "get_zone_info": false, 00:23:14.200 "zone_management": false, 00:23:14.200 "zone_append": false, 00:23:14.200 "compare": true, 00:23:14.200 "compare_and_write": true, 00:23:14.200 "abort": true, 00:23:14.200 "seek_hole": false, 00:23:14.200 "seek_data": false, 00:23:14.200 "copy": true, 00:23:14.200 "nvme_iov_md": false 00:23:14.200 }, 00:23:14.200 "memory_domains": [ 00:23:14.200 { 00:23:14.200 "dma_device_id": "system", 00:23:14.200 "dma_device_type": 1 00:23:14.200 } 00:23:14.200 ], 00:23:14.200 "driver_specific": { 00:23:14.200 "nvme": [ 00:23:14.200 { 00:23:14.200 "trid": { 00:23:14.200 "trtype": "TCP", 00:23:14.200 "adrfam": "IPv4", 00:23:14.200 "traddr": "10.0.0.2", 00:23:14.200 "trsvcid": "4421", 00:23:14.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:14.200 }, 00:23:14.200 "ctrlr_data": { 00:23:14.200 "cntlid": 3, 00:23:14.200 "vendor_id": "0x8086", 00:23:14.200 "model_number": "SPDK bdev Controller", 00:23:14.200 "serial_number": "00000000000000000000", 00:23:14.200 "firmware_revision": "25.01", 00:23:14.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.200 "oacs": { 00:23:14.200 "security": 0, 00:23:14.200 "format": 0, 00:23:14.200 "firmware": 0, 00:23:14.200 "ns_manage": 0 00:23:14.200 }, 00:23:14.200 "multi_ctrlr": true, 00:23:14.200 "ana_reporting": false 00:23:14.200 }, 00:23:14.200 "vs": { 00:23:14.200 "nvme_version": "1.3" 00:23:14.200 }, 00:23:14.200 "ns_data": { 00:23:14.200 "id": 1, 00:23:14.200 "can_share": true 00:23:14.200 } 
00:23:14.200 } 00:23:14.200 ], 00:23:14.200 "mp_policy": "active_passive" 00:23:14.200 } 00:23:14.200 } 00:23:14.200 ] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.BPVmxt3SHH 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.200 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.200 rmmod nvme_tcp 00:23:14.200 rmmod nvme_fabrics 00:23:14.200 rmmod nvme_keyring 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:14.459 19:21:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 274679 ']' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 274679 ']' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274679' 00:23:14.459 killing process with pid 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 274679 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.459 19:21:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.459 19:21:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.011 00:23:17.011 real 0m5.775s 00:23:17.011 user 0m2.224s 00:23:17.011 sys 0m1.988s 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:17.011 ************************************ 00:23:17.011 END TEST nvmf_async_init 00:23:17.011 ************************************ 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.011 ************************************ 00:23:17.011 START TEST dma 00:23:17.011 ************************************ 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:17.011 * 
Looking for test storage... 00:23:17.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.011 --rc genhtml_branch_coverage=1 00:23:17.011 --rc genhtml_function_coverage=1 00:23:17.011 --rc genhtml_legend=1 00:23:17.011 --rc geninfo_all_blocks=1 00:23:17.011 --rc geninfo_unexecuted_blocks=1 00:23:17.011 00:23:17.011 ' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.011 --rc genhtml_branch_coverage=1 00:23:17.011 --rc genhtml_function_coverage=1 
00:23:17.011 --rc genhtml_legend=1 00:23:17.011 --rc geninfo_all_blocks=1 00:23:17.011 --rc geninfo_unexecuted_blocks=1 00:23:17.011 00:23:17.011 ' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.011 --rc genhtml_branch_coverage=1 00:23:17.011 --rc genhtml_function_coverage=1 00:23:17.011 --rc genhtml_legend=1 00:23:17.011 --rc geninfo_all_blocks=1 00:23:17.011 --rc geninfo_unexecuted_blocks=1 00:23:17.011 00:23:17.011 ' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.011 --rc genhtml_branch_coverage=1 00:23:17.011 --rc genhtml_function_coverage=1 00:23:17.011 --rc genhtml_legend=1 00:23:17.011 --rc geninfo_all_blocks=1 00:23:17.011 --rc geninfo_unexecuted_blocks=1 00:23:17.011 00:23:17.011 ' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:17.011 
19:22:01 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:17.011 19:22:01 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:17.011 00:23:17.011 real 0m0.160s 00:23:17.011 user 0m0.107s 00:23:17.011 sys 0m0.062s 00:23:17.012 19:22:01 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:17.012 ************************************ 00:23:17.012 END TEST dma 00:23:17.012 ************************************ 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.012 ************************************ 00:23:17.012 START TEST nvmf_identify 00:23:17.012 ************************************ 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:17.012 * Looking for test storage... 
00:23:17.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.012 --rc genhtml_branch_coverage=1 00:23:17.012 --rc genhtml_function_coverage=1 00:23:17.012 --rc genhtml_legend=1 00:23:17.012 --rc geninfo_all_blocks=1 00:23:17.012 --rc geninfo_unexecuted_blocks=1 00:23:17.012 00:23:17.012 ' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:17.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.012 --rc genhtml_branch_coverage=1 00:23:17.012 --rc genhtml_function_coverage=1 00:23:17.012 --rc genhtml_legend=1 00:23:17.012 --rc geninfo_all_blocks=1 00:23:17.012 --rc geninfo_unexecuted_blocks=1 00:23:17.012 00:23:17.012 ' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.012 --rc genhtml_branch_coverage=1 00:23:17.012 --rc genhtml_function_coverage=1 00:23:17.012 --rc genhtml_legend=1 00:23:17.012 --rc geninfo_all_blocks=1 00:23:17.012 --rc geninfo_unexecuted_blocks=1 00:23:17.012 00:23:17.012 ' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.012 --rc genhtml_branch_coverage=1 00:23:17.012 --rc genhtml_function_coverage=1 00:23:17.012 --rc genhtml_legend=1 00:23:17.012 --rc geninfo_all_blocks=1 00:23:17.012 --rc geninfo_unexecuted_blocks=1 00:23:17.012 00:23:17.012 ' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.012 19:22:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.548 19:22:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:19.548 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.548 
19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:19.548 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:19.548 Found net devices under 0000:84:00.0: cvl_0_0 00:23:19.548 19:22:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:19.548 Found net devices under 0000:84:00.1: cvl_0_1 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:19.548 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:19.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:23:19.548 00:23:19.548 --- 10.0.0.2 ping statistics --- 00:23:19.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.548 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:19.549 00:23:19.549 --- 10.0.0.1 ping statistics --- 00:23:19.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.549 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=276954 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 276954 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 276954 ']' 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 [2024-12-06 19:22:04.256486] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:19.549 [2024-12-06 19:22:04.256577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.549 [2024-12-06 19:22:04.326658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.549 [2024-12-06 19:22:04.381685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.549 [2024-12-06 19:22:04.381762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.549 [2024-12-06 19:22:04.381783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.549 [2024-12-06 19:22:04.381801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.549 [2024-12-06 19:22:04.381816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.549 [2024-12-06 19:22:04.383452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.549 [2024-12-06 19:22:04.383559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.549 [2024-12-06 19:22:04.383636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.549 [2024-12-06 19:22:04.383639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 [2024-12-06 19:22:04.507036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 Malloc0 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.549 19:22:04 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.549 [2024-12-06 19:22:04.590084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.549 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.812 19:22:04 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.812 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:19.812 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.812 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.812 [ 00:23:19.812 { 00:23:19.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:19.812 "subtype": "Discovery", 00:23:19.812 "listen_addresses": [ 00:23:19.812 { 00:23:19.812 "trtype": "TCP", 00:23:19.812 "adrfam": "IPv4", 00:23:19.812 "traddr": "10.0.0.2", 00:23:19.812 "trsvcid": "4420" 00:23:19.812 } 00:23:19.812 ], 00:23:19.812 "allow_any_host": true, 00:23:19.812 "hosts": [] 00:23:19.812 }, 00:23:19.812 { 00:23:19.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.812 "subtype": "NVMe", 00:23:19.812 "listen_addresses": [ 00:23:19.812 { 00:23:19.812 "trtype": "TCP", 00:23:19.812 "adrfam": "IPv4", 00:23:19.812 "traddr": "10.0.0.2", 00:23:19.812 "trsvcid": "4420" 00:23:19.812 } 00:23:19.812 ], 00:23:19.812 "allow_any_host": true, 00:23:19.812 "hosts": [], 00:23:19.812 "serial_number": "SPDK00000000000001", 00:23:19.812 "model_number": "SPDK bdev Controller", 00:23:19.812 "max_namespaces": 32, 00:23:19.812 "min_cntlid": 1, 00:23:19.812 "max_cntlid": 65519, 00:23:19.812 "namespaces": [ 00:23:19.812 { 00:23:19.812 "nsid": 1, 00:23:19.812 "bdev_name": "Malloc0", 00:23:19.812 "name": "Malloc0", 00:23:19.812 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:19.812 "eui64": "ABCDEF0123456789", 00:23:19.812 "uuid": "c6a27bb6-14cc-484a-b1df-5b3e365ac305" 00:23:19.812 } 00:23:19.812 ] 00:23:19.812 } 00:23:19.812 ] 00:23:19.812 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.812 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:19.812 [2024-12-06 19:22:04.631289] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:19.812 [2024-12-06 19:22:04.631336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276981 ] 00:23:19.812 [2024-12-06 19:22:04.678303] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:19.812 [2024-12-06 19:22:04.678374] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:19.812 [2024-12-06 19:22:04.678385] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:19.812 [2024-12-06 19:22:04.678402] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:19.812 [2024-12-06 19:22:04.678415] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:19.812 [2024-12-06 19:22:04.686203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:19.812 [2024-12-06 19:22:04.686268] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf07690 0 00:23:19.812 [2024-12-06 19:22:04.686495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:19.812 [2024-12-06 19:22:04.686514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:19.812 [2024-12-06 19:22:04.686530] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:19.812 [2024-12-06 19:22:04.686535] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:19.812 [2024-12-06 19:22:04.686579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.686591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.686599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.812 [2024-12-06 19:22:04.686617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:19.812 [2024-12-06 19:22:04.686642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.812 [2024-12-06 19:22:04.693738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.812 [2024-12-06 19:22:04.693756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.812 [2024-12-06 19:22:04.693763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.693771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.812 [2024-12-06 19:22:04.693788] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:19.812 [2024-12-06 19:22:04.693809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:19.812 [2024-12-06 19:22:04.693819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:19.812 [2024-12-06 19:22:04.693841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.693850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.693856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 
00:23:19.812 [2024-12-06 19:22:04.693867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.812 [2024-12-06 19:22:04.693891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.812 [2024-12-06 19:22:04.694126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.812 [2024-12-06 19:22:04.694138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.812 [2024-12-06 19:22:04.694144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.812 [2024-12-06 19:22:04.694155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.812 [2024-12-06 19:22:04.694165] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:19.812 [2024-12-06 19:22:04.694177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:19.813 [2024-12-06 19:22:04.694189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.694212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.694232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 19:22:04.694309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.694323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:19.813 [2024-12-06 19:22:04.694329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.694355] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:19.813 [2024-12-06 19:22:04.694369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:19.813 [2024-12-06 19:22:04.694380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.694403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.694423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 19:22:04.694495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.694506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.813 [2024-12-06 19:22:04.694512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.694527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:19.813 [2024-12-06 19:22:04.694543] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.694567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.694586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 19:22:04.694658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.694671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.813 [2024-12-06 19:22:04.694677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.694692] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:19.813 [2024-12-06 19:22:04.694704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:19.813 [2024-12-06 19:22:04.694717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:19.813 [2024-12-06 19:22:04.694852] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:19.813 [2024-12-06 19:22:04.694861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:19.813 [2024-12-06 19:22:04.694883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.694896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.694906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.694928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 19:22:04.695128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.695140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.813 [2024-12-06 19:22:04.695146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.695160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:19.813 [2024-12-06 19:22:04.695175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.695199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.695219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 
19:22:04.695294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.695308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.813 [2024-12-06 19:22:04.695314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.695327] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:19.813 [2024-12-06 19:22:04.695335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:19.813 [2024-12-06 19:22:04.695348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:19.813 [2024-12-06 19:22:04.695364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:19.813 [2024-12-06 19:22:04.695380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.813 [2024-12-06 19:22:04.695397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.813 [2024-12-06 19:22:04.695418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.813 [2024-12-06 19:22:04.695533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:19.813 [2024-12-06 19:22:04.695547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:23:19.813 [2024-12-06 19:22:04.695554] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695560] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf07690): datao=0, datal=4096, cccid=0 00:23:19.813 [2024-12-06 19:22:04.695567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69100) on tqpair(0xf07690): expected_datao=0, payload_size=4096 00:23:19.813 [2024-12-06 19:22:04.695575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695585] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695593] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.813 [2024-12-06 19:22:04.695613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.813 [2024-12-06 19:22:04.695619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.813 [2024-12-06 19:22:04.695626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.813 [2024-12-06 19:22:04.695638] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:19.814 [2024-12-06 19:22:04.695651] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:19.814 [2024-12-06 19:22:04.695659] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:19.814 [2024-12-06 19:22:04.695668] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:19.814 [2024-12-06 19:22:04.695675] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:23:19.814 [2024-12-06 19:22:04.695682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:19.814 [2024-12-06 19:22:04.695696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:19.814 [2024-12-06 19:22:04.695730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.695756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:19.814 [2024-12-06 19:22:04.695777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.814 [2024-12-06 19:22:04.695880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.814 [2024-12-06 19:22:04.695892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.814 [2024-12-06 19:22:04.695898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.814 [2024-12-06 19:22:04.695916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.695939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.814 [2024-12-06 19:22:04.695949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.695974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.814 [2024-12-06 19:22:04.695984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.695996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.814 [2024-12-06 19:22:04.696028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.814 [2024-12-06 19:22:04.696057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:19.814 [2024-12-06 19:22:04.696080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:23:19.814 [2024-12-06 19:22:04.696092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.814 [2024-12-06 19:22:04.696130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69100, cid 0, qid 0 00:23:19.814 [2024-12-06 19:22:04.696140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69280, cid 1, qid 0 00:23:19.814 [2024-12-06 19:22:04.696147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69400, cid 2, qid 0 00:23:19.814 [2024-12-06 19:22:04.696154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.814 [2024-12-06 19:22:04.696161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69700, cid 4, qid 0 00:23:19.814 [2024-12-06 19:22:04.696322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.814 [2024-12-06 19:22:04.696336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.814 [2024-12-06 19:22:04.696342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69700) on tqpair=0xf07690 00:23:19.814 [2024-12-06 19:22:04.696357] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:19.814 [2024-12-06 19:22:04.696365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:19.814 [2024-12-06 19:22:04.696382] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.814 [2024-12-06 19:22:04.696421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69700, cid 4, qid 0 00:23:19.814 [2024-12-06 19:22:04.696534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:19.814 [2024-12-06 19:22:04.696545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:19.814 [2024-12-06 19:22:04.696551] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696561] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf07690): datao=0, datal=4096, cccid=4 00:23:19.814 [2024-12-06 19:22:04.696568] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69700) on tqpair(0xf07690): expected_datao=0, payload_size=4096 00:23:19.814 [2024-12-06 19:22:04.696575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696584] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696591] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.814 [2024-12-06 19:22:04.696611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.814 [2024-12-06 19:22:04.696617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69700) on tqpair=0xf07690 00:23:19.814 [2024-12-06 19:22:04.696641] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:19.814 [2024-12-06 19:22:04.696678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.814 [2024-12-06 19:22:04.696732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf07690) 00:23:19.814 [2024-12-06 19:22:04.696756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.814 [2024-12-06 19:22:04.696782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69700, cid 4, qid 0 00:23:19.814 [2024-12-06 19:22:04.696794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69880, cid 5, qid 0 00:23:19.814 [2024-12-06 19:22:04.696946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:19.814 [2024-12-06 19:22:04.696958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:19.814 [2024-12-06 19:22:04.696965] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696971] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf07690): datao=0, datal=1024, cccid=4 00:23:19.814 [2024-12-06 19:22:04.696978] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69700) on tqpair(0xf07690): expected_datao=0, 
payload_size=1024 00:23:19.814 [2024-12-06 19:22:04.696985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.696994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.697016] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:19.814 [2024-12-06 19:22:04.697024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.815 [2024-12-06 19:22:04.697033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.815 [2024-12-06 19:22:04.697039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.697045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69880) on tqpair=0xf07690 00:23:19.815 [2024-12-06 19:22:04.737860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.815 [2024-12-06 19:22:04.737879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.815 [2024-12-06 19:22:04.737887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.737893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69700) on tqpair=0xf07690 00:23:19.815 [2024-12-06 19:22:04.737911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.737920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf07690) 00:23:19.815 [2024-12-06 19:22:04.737935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.815 [2024-12-06 19:22:04.737966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69700, cid 4, qid 0 00:23:19.815 [2024-12-06 19:22:04.738076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:19.815 [2024-12-06 19:22:04.738088] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:19.815 [2024-12-06 19:22:04.738095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738101] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf07690): datao=0, datal=3072, cccid=4 00:23:19.815 [2024-12-06 19:22:04.738108] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69700) on tqpair(0xf07690): expected_datao=0, payload_size=3072 00:23:19.815 [2024-12-06 19:22:04.738114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738130] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.815 [2024-12-06 19:22:04.738150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.815 [2024-12-06 19:22:04.738156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69700) on tqpair=0xf07690 00:23:19.815 [2024-12-06 19:22:04.738177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf07690) 00:23:19.815 [2024-12-06 19:22:04.738195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.815 [2024-12-06 19:22:04.738221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69700, cid 4, qid 0 00:23:19.815 [2024-12-06 19:22:04.738320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:19.815 [2024-12-06 
19:22:04.738334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:19.815 [2024-12-06 19:22:04.738340] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738346] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf07690): datao=0, datal=8, cccid=4 00:23:19.815 [2024-12-06 19:22:04.738353] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69700) on tqpair(0xf07690): expected_datao=0, payload_size=8 00:23:19.815 [2024-12-06 19:22:04.738360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.738376] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.781741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.815 [2024-12-06 19:22:04.781759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.815 [2024-12-06 19:22:04.781766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.815 [2024-12-06 19:22:04.781773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69700) on tqpair=0xf07690 00:23:19.815 ===================================================== 00:23:19.815 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:19.815 ===================================================== 00:23:19.815 Controller Capabilities/Features 00:23:19.815 ================================ 00:23:19.815 Vendor ID: 0000 00:23:19.815 Subsystem Vendor ID: 0000 00:23:19.815 Serial Number: .................... 00:23:19.815 Model Number: ........................................ 
00:23:19.815 Firmware Version: 25.01 00:23:19.815 Recommended Arb Burst: 0 00:23:19.815 IEEE OUI Identifier: 00 00 00 00:23:19.815 Multi-path I/O 00:23:19.815 May have multiple subsystem ports: No 00:23:19.815 May have multiple controllers: No 00:23:19.815 Associated with SR-IOV VF: No 00:23:19.815 Max Data Transfer Size: 131072 00:23:19.815 Max Number of Namespaces: 0 00:23:19.815 Max Number of I/O Queues: 1024 00:23:19.815 NVMe Specification Version (VS): 1.3 00:23:19.815 NVMe Specification Version (Identify): 1.3 00:23:19.815 Maximum Queue Entries: 128 00:23:19.815 Contiguous Queues Required: Yes 00:23:19.815 Arbitration Mechanisms Supported 00:23:19.815 Weighted Round Robin: Not Supported 00:23:19.815 Vendor Specific: Not Supported 00:23:19.815 Reset Timeout: 15000 ms 00:23:19.815 Doorbell Stride: 4 bytes 00:23:19.815 NVM Subsystem Reset: Not Supported 00:23:19.815 Command Sets Supported 00:23:19.815 NVM Command Set: Supported 00:23:19.815 Boot Partition: Not Supported 00:23:19.815 Memory Page Size Minimum: 4096 bytes 00:23:19.815 Memory Page Size Maximum: 4096 bytes 00:23:19.815 Persistent Memory Region: Not Supported 00:23:19.815 Optional Asynchronous Events Supported 00:23:19.815 Namespace Attribute Notices: Not Supported 00:23:19.815 Firmware Activation Notices: Not Supported 00:23:19.815 ANA Change Notices: Not Supported 00:23:19.815 PLE Aggregate Log Change Notices: Not Supported 00:23:19.815 LBA Status Info Alert Notices: Not Supported 00:23:19.815 EGE Aggregate Log Change Notices: Not Supported 00:23:19.815 Normal NVM Subsystem Shutdown event: Not Supported 00:23:19.815 Zone Descriptor Change Notices: Not Supported 00:23:19.815 Discovery Log Change Notices: Supported 00:23:19.815 Controller Attributes 00:23:19.815 128-bit Host Identifier: Not Supported 00:23:19.815 Non-Operational Permissive Mode: Not Supported 00:23:19.815 NVM Sets: Not Supported 00:23:19.815 Read Recovery Levels: Not Supported 00:23:19.815 Endurance Groups: Not Supported 00:23:19.815 
Predictable Latency Mode: Not Supported 00:23:19.815 Traffic Based Keep ALive: Not Supported 00:23:19.815 Namespace Granularity: Not Supported 00:23:19.815 SQ Associations: Not Supported 00:23:19.815 UUID List: Not Supported 00:23:19.815 Multi-Domain Subsystem: Not Supported 00:23:19.815 Fixed Capacity Management: Not Supported 00:23:19.815 Variable Capacity Management: Not Supported 00:23:19.815 Delete Endurance Group: Not Supported 00:23:19.815 Delete NVM Set: Not Supported 00:23:19.815 Extended LBA Formats Supported: Not Supported 00:23:19.815 Flexible Data Placement Supported: Not Supported 00:23:19.815 00:23:19.815 Controller Memory Buffer Support 00:23:19.815 ================================ 00:23:19.815 Supported: No 00:23:19.815 00:23:19.815 Persistent Memory Region Support 00:23:19.815 ================================ 00:23:19.815 Supported: No 00:23:19.815 00:23:19.815 Admin Command Set Attributes 00:23:19.815 ============================ 00:23:19.816 Security Send/Receive: Not Supported 00:23:19.816 Format NVM: Not Supported 00:23:19.816 Firmware Activate/Download: Not Supported 00:23:19.816 Namespace Management: Not Supported 00:23:19.816 Device Self-Test: Not Supported 00:23:19.816 Directives: Not Supported 00:23:19.816 NVMe-MI: Not Supported 00:23:19.816 Virtualization Management: Not Supported 00:23:19.816 Doorbell Buffer Config: Not Supported 00:23:19.816 Get LBA Status Capability: Not Supported 00:23:19.816 Command & Feature Lockdown Capability: Not Supported 00:23:19.816 Abort Command Limit: 1 00:23:19.816 Async Event Request Limit: 4 00:23:19.816 Number of Firmware Slots: N/A 00:23:19.816 Firmware Slot 1 Read-Only: N/A 00:23:19.816 Firmware Activation Without Reset: N/A 00:23:19.816 Multiple Update Detection Support: N/A 00:23:19.816 Firmware Update Granularity: No Information Provided 00:23:19.816 Per-Namespace SMART Log: No 00:23:19.816 Asymmetric Namespace Access Log Page: Not Supported 00:23:19.816 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:19.816 Command Effects Log Page: Not Supported 00:23:19.816 Get Log Page Extended Data: Supported 00:23:19.816 Telemetry Log Pages: Not Supported 00:23:19.816 Persistent Event Log Pages: Not Supported 00:23:19.816 Supported Log Pages Log Page: May Support 00:23:19.816 Commands Supported & Effects Log Page: Not Supported 00:23:19.816 Feature Identifiers & Effects Log Page:May Support 00:23:19.816 NVMe-MI Commands & Effects Log Page: May Support 00:23:19.816 Data Area 4 for Telemetry Log: Not Supported 00:23:19.816 Error Log Page Entries Supported: 128 00:23:19.816 Keep Alive: Not Supported 00:23:19.816 00:23:19.816 NVM Command Set Attributes 00:23:19.816 ========================== 00:23:19.816 Submission Queue Entry Size 00:23:19.816 Max: 1 00:23:19.816 Min: 1 00:23:19.816 Completion Queue Entry Size 00:23:19.816 Max: 1 00:23:19.816 Min: 1 00:23:19.816 Number of Namespaces: 0 00:23:19.816 Compare Command: Not Supported 00:23:19.816 Write Uncorrectable Command: Not Supported 00:23:19.816 Dataset Management Command: Not Supported 00:23:19.816 Write Zeroes Command: Not Supported 00:23:19.816 Set Features Save Field: Not Supported 00:23:19.816 Reservations: Not Supported 00:23:19.816 Timestamp: Not Supported 00:23:19.816 Copy: Not Supported 00:23:19.816 Volatile Write Cache: Not Present 00:23:19.816 Atomic Write Unit (Normal): 1 00:23:19.816 Atomic Write Unit (PFail): 1 00:23:19.816 Atomic Compare & Write Unit: 1 00:23:19.816 Fused Compare & Write: Supported 00:23:19.816 Scatter-Gather List 00:23:19.816 SGL Command Set: Supported 00:23:19.816 SGL Keyed: Supported 00:23:19.816 SGL Bit Bucket Descriptor: Not Supported 00:23:19.816 SGL Metadata Pointer: Not Supported 00:23:19.816 Oversized SGL: Not Supported 00:23:19.816 SGL Metadata Address: Not Supported 00:23:19.816 SGL Offset: Supported 00:23:19.816 Transport SGL Data Block: Not Supported 00:23:19.816 Replay Protected Memory Block: Not Supported 00:23:19.816 00:23:19.816 
Firmware Slot Information
00:23:19.816 =========================
00:23:19.816 Active slot: 0
00:23:19.816 
00:23:19.816 
00:23:19.816 Error Log
00:23:19.816 =========
00:23:19.816 
00:23:19.816 Active Namespaces
00:23:19.816 =================
00:23:19.816 Discovery Log Page
00:23:19.816 ==================
00:23:19.816 Generation Counter: 2
00:23:19.816 Number of Records: 2
00:23:19.816 Record Format: 0
00:23:19.816 
00:23:19.816 Discovery Log Entry 0
00:23:19.816 ----------------------
00:23:19.816 Transport Type: 3 (TCP)
00:23:19.816 Address Family: 1 (IPv4)
00:23:19.816 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:19.816 Entry Flags:
00:23:19.816 Duplicate Returned Information: 1
00:23:19.816 Explicit Persistent Connection Support for Discovery: 1
00:23:19.816 Transport Requirements:
00:23:19.816 Secure Channel: Not Required
00:23:19.816 Port ID: 0 (0x0000)
00:23:19.816 Controller ID: 65535 (0xffff)
00:23:19.816 Admin Max SQ Size: 128
00:23:19.816 Transport Service Identifier: 4420
00:23:19.816 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:19.816 Transport Address: 10.0.0.2
00:23:19.816 Discovery Log Entry 1
00:23:19.816 ----------------------
00:23:19.816 Transport Type: 3 (TCP)
00:23:19.816 Address Family: 1 (IPv4)
00:23:19.816 Subsystem Type: 2 (NVM Subsystem)
00:23:19.816 Entry Flags:
00:23:19.816 Duplicate Returned Information: 0
00:23:19.816 Explicit Persistent Connection Support for Discovery: 0
00:23:19.816 Transport Requirements:
00:23:19.816 Secure Channel: Not Required
00:23:19.816 Port ID: 0 (0x0000)
00:23:19.816 Controller ID: 65535 (0xffff)
00:23:19.816 Admin Max SQ Size: 128
00:23:19.816 Transport Service Identifier: 4420
00:23:19.816 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:19.816 Transport Address: 10.0.0.2 [2024-12-06 19:22:04.781898] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:19.816 [2024-12-06 
19:22:04.781921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69100) on tqpair=0xf07690 00:23:19.816 [2024-12-06 19:22:04.781933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.816 [2024-12-06 19:22:04.781942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69280) on tqpair=0xf07690 00:23:19.816 [2024-12-06 19:22:04.781949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.816 [2024-12-06 19:22:04.781960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69400) on tqpair=0xf07690 00:23:19.816 [2024-12-06 19:22:04.781968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.816 [2024-12-06 19:22:04.781976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.816 [2024-12-06 19:22:04.781983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.816 [2024-12-06 19:22:04.782000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.816 [2024-12-06 19:22:04.782009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.816 [2024-12-06 19:22:04.782015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.782025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.782069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.782241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 
19:22:04.782253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.782259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.782277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.782300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.782325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.782417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.782430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.782437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.782451] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:19.817 [2024-12-06 19:22:04.782458] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:19.817 [2024-12-06 19:22:04.782474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 
[2024-12-06 19:22:04.782489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.782499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.782519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.782592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.782606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.782612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.782635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.782665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.782685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.782785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.782799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.782806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 
00:23:19.817 [2024-12-06 19:22:04.782828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.782853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.782875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.782949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.782961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.782967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.782974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.782989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.783028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.783048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.783133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.783147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 
[2024-12-06 19:22:04.783153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.783175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.783199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.783219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.783293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.783307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.783313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.783335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.783363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.783384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 
00:23:19.817 [2024-12-06 19:22:04.783454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.783465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.783472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.783493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.817 [2024-12-06 19:22:04.783517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.817 [2024-12-06 19:22:04.783536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.817 [2024-12-06 19:22:04.783612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.817 [2024-12-06 19:22:04.783625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.817 [2024-12-06 19:22:04.783632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.817 [2024-12-06 19:22:04.783653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.817 [2024-12-06 19:22:04.783668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.783678] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.783712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.783791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.783804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.783810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.783816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.783832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.783841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.783847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.783857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.783877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.783954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.783967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.783974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.783980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784020] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.784060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.784146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.784153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.784217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.784304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.784311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784317] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.784376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.784460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.784467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.784533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 
19:22:04.784614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.784620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 19:22:04.784685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.818 [2024-12-06 19:22:04.784813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.818 [2024-12-06 19:22:04.784819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.818 [2024-12-06 19:22:04.784842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.818 [2024-12-06 19:22:04.784857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.818 [2024-12-06 19:22:04.784867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.818 [2024-12-06 
19:22:04.784888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.818 [2024-12-06 19:22:04.784961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.784975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.784981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.784988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.819 [2024-12-06 19:22:04.785018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.819 [2024-12-06 19:22:04.785043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.819 [2024-12-06 19:22:04.785064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.819 [2024-12-06 19:22:04.785157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.785170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.785177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.819 [2024-12-06 19:22:04.785198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xf07690) 00:23:19.819 [2024-12-06 19:22:04.785222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.819 [2024-12-06 19:22:04.785243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.819 [2024-12-06 19:22:04.785316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.785329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.785335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.819 [2024-12-06 19:22:04.785357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.819 [2024-12-06 19:22:04.785381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.819 [2024-12-06 19:22:04.785401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.819 [2024-12-06 19:22:04.785494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.785511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.785518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.819 [2024-12-06 19:22:04.785538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:19.819 [2024-12-06 19:22:04.785547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.819 [2024-12-06 19:22:04.785563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.819 [2024-12-06 19:22:04.785583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.819 [2024-12-06 19:22:04.785655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.785666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.785673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690 00:23:19.819 [2024-12-06 19:22:04.785694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:19.819 [2024-12-06 19:22:04.785709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf07690) 00:23:19.819 [2024-12-06 19:22:04.785718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.819 [2024-12-06 19:22:04.789754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69580, cid 3, qid 0 00:23:19.819 [2024-12-06 19:22:04.789882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:19.819 [2024-12-06 19:22:04.789895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:19.819 [2024-12-06 19:22:04.789902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:19.819 [2024-12-06 19:22:04.789908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf69580) on tqpair=0xf07690
00:23:19.819 [2024-12-06 19:22:04.789921] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:23:19.819 
00:23:19.819 19:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:19.819 [2024-12-06 19:22:04.822348] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:23:19.819 [2024-12-06 19:22:04.822385] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276989 ]
00:23:20.079 [2024-12-06 19:22:04.876348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:23:20.079 [2024-12-06 19:22:04.876402] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:23:20.079 [2024-12-06 19:22:04.876412] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:23:20.079 [2024-12-06 19:22:04.876427] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:23:20.079 [2024-12-06 19:22:04.876439] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:23:20.079 [2024-12-06 19:22:04.879993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:23:20.079 [2024-12-06 19:22:04.880050] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb49690 0 
00:23:20.079 [2024-12-06 19:22:04.887735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:23:20.079 [2024-12-06 19:22:04.887755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:23:20.079 [2024-12-06 19:22:04.887763] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:23:20.079 [2024-12-06 19:22:04.887768] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:23:20.079 [2024-12-06 19:22:04.887807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:20.079 [2024-12-06 19:22:04.887818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:20.079 [2024-12-06 19:22:04.887825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690)
00:23:20.079 [2024-12-06 19:22:04.887837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:23:20.079 [2024-12-06 19:22:04.887864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0
00:23:20.079 [2024-12-06 19:22:04.894741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:20.079 [2024-12-06 19:22:04.894759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:20.080 [2024-12-06 19:22:04.894766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.894773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690
00:23:20.080 [2024-12-06 19:22:04.894789] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:23:20.080 [2024-12-06 19:22:04.894800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:23:20.080 [2024-12-06 19:22:04.894809] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:23:20.080 [2024-12-06 19:22:04.894827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.894835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.894842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690)
00:23:20.080 [2024-12-06 19:22:04.894852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.080 [2024-12-06 19:22:04.894876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0
00:23:20.080 [2024-12-06 19:22:04.895019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:20.080 [2024-12-06 19:22:04.895032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:20.080 [2024-12-06 19:22:04.895039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690
00:23:20.080 [2024-12-06 19:22:04.895053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:23:20.080 [2024-12-06 19:22:04.895066] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:23:20.080 [2024-12-06 19:22:04.895078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690)
00:23:20.080 [2024-12-06 19:22:04.895115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.080 [2024-12-06 19:22:04.895137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0
00:23:20.080 [2024-12-06 19:22:04.895225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:20.080 [2024-12-06 19:22:04.895238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:20.080 [2024-12-06 19:22:04.895248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690
00:23:20.080 [2024-12-06 19:22:04.895263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:23:20.080 [2024-12-06 19:22:04.895277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:23:20.080 [2024-12-06 19:22:04.895289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690)
00:23:20.080 [2024-12-06 19:22:04.895311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.080 [2024-12-06 19:22:04.895332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0
00:23:20.080 [2024-12-06 19:22:04.895409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:20.080 [2024-12-06 19:22:04.895423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:20.080 [2024-12-06 19:22:04.895429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690
00:23:20.080 [2024-12-06 19:22:04.895443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:20.080 [2024-12-06 19:22:04.895459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690)
00:23:20.080 [2024-12-06 19:22:04.895483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:20.080 [2024-12-06 19:22:04.895503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0
00:23:20.080 [2024-12-06 19:22:04.895577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:20.080 [2024-12-06 19:22:04.895590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:20.080 [2024-12-06 19:22:04.895596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:20.080 [2024-12-06 19:22:04.895602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690
00:23:20.080 [2024-12-06 19:22:04.895609] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:23:20.080 [2024-12-06 19:22:04.895617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:23:20.080 [2024-12-06 19:22:04.895630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:20.080 [2024-12-06 19:22:04.895740] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:20.080 [2024-12-06 19:22:04.895750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.080 [2024-12-06 19:22:04.895762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.895769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.895775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690) 00:23:20.080 [2024-12-06 19:22:04.895785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.080 [2024-12-06 19:22:04.895807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0 00:23:20.080 [2024-12-06 19:22:04.895965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.080 [2024-12-06 19:22:04.895980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.080 [2024-12-06 19:22:04.895986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.895993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690 00:23:20.080 [2024-12-06 19:22:04.896015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.080 [2024-12-06 19:22:04.896032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690) 00:23:20.080 [2024-12-06 19:22:04.896056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.080 [2024-12-06 19:22:04.896076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0 00:23:20.080 [2024-12-06 19:22:04.896150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.080 [2024-12-06 19:22:04.896163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.080 [2024-12-06 19:22:04.896169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690 00:23:20.080 [2024-12-06 19:22:04.896182] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.080 [2024-12-06 19:22:04.896190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:20.080 [2024-12-06 19:22:04.896203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:20.080 [2024-12-06 19:22:04.896216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.080 [2024-12-06 19:22:04.896229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690) 00:23:20.080 [2024-12-06 19:22:04.896246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.080 [2024-12-06 19:22:04.896267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0 00:23:20.080 
[2024-12-06 19:22:04.896388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.080 [2024-12-06 19:22:04.896399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.080 [2024-12-06 19:22:04.896406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896411] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=4096, cccid=0 00:23:20.080 [2024-12-06 19:22:04.896418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab100) on tqpair(0xb49690): expected_datao=0, payload_size=4096 00:23:20.080 [2024-12-06 19:22:04.896425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.896449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.938747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.080 [2024-12-06 19:22:04.938766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.080 [2024-12-06 19:22:04.938774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.080 [2024-12-06 19:22:04.938780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690 00:23:20.080 [2024-12-06 19:22:04.938791] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:20.080 [2024-12-06 19:22:04.938810] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:20.080 [2024-12-06 19:22:04.938819] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:20.080 [2024-12-06 19:22:04.938826] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
transport max_sges 16 00:23:20.080 [2024-12-06 19:22:04.938833] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:20.080 [2024-12-06 19:22:04.938841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:20.080 [2024-12-06 19:22:04.938856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.938868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.938875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.938881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.938892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.081 [2024-12-06 19:22:04.938915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0 00:23:20.081 [2024-12-06 19:22:04.939116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.081 [2024-12-06 19:22:04.939130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.081 [2024-12-06 19:22:04.939136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690 00:23:20.081 [2024-12-06 19:22:04.939152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.081 [2024-12-06 19:22:04.939184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.081 [2024-12-06 19:22:04.939213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.081 [2024-12-06 19:22:04.939241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.081 [2024-12-06 19:22:04.939269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939287] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.081 [2024-12-06 19:22:04.939341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab100, cid 0, qid 0 00:23:20.081 [2024-12-06 19:22:04.939351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab280, cid 1, qid 0 00:23:20.081 [2024-12-06 19:22:04.939358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab400, cid 2, qid 0 00:23:20.081 [2024-12-06 19:22:04.939365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.081 [2024-12-06 19:22:04.939372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.081 [2024-12-06 19:22:04.939540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.081 [2024-12-06 19:22:04.939554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.081 [2024-12-06 19:22:04.939560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.081 [2024-12-06 19:22:04.939573] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:20.081 [2024-12-06 19:22:04.939582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.939638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.081 [2024-12-06 19:22:04.939658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.081 [2024-12-06 19:22:04.939856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.081 [2024-12-06 19:22:04.939871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.081 [2024-12-06 19:22:04.939878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.939884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.081 [2024-12-06 19:22:04.939957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.939978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:20.081 [2024-12-06 
19:22:04.939992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.940000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.940010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.081 [2024-12-06 19:22:04.940046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.081 [2024-12-06 19:22:04.940238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.081 [2024-12-06 19:22:04.940253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.081 [2024-12-06 19:22:04.940259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.940265] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=4096, cccid=4 00:23:20.081 [2024-12-06 19:22:04.940272] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab700) on tqpair(0xb49690): expected_datao=0, payload_size=4096 00:23:20.081 [2024-12-06 19:22:04.940279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.940296] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.940304] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.981839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.081 [2024-12-06 19:22:04.981857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.081 [2024-12-06 19:22:04.981864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.981870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.081 
[2024-12-06 19:22:04.981890] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:20.081 [2024-12-06 19:22:04.981909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.981928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.981941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.981948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.981959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.081 [2024-12-06 19:22:04.981982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.081 [2024-12-06 19:22:04.982124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.081 [2024-12-06 19:22:04.982136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.081 [2024-12-06 19:22:04.982143] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982148] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=4096, cccid=4 00:23:20.081 [2024-12-06 19:22:04.982155] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab700) on tqpair(0xb49690): expected_datao=0, payload_size=4096 00:23:20.081 [2024-12-06 19:22:04.982162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982178] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.081 [2024-12-06 19:22:04.982245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.081 [2024-12-06 19:22:04.982251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.081 [2024-12-06 19:22:04.982280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.982298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:20.081 [2024-12-06 19:22:04.982311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.081 [2024-12-06 19:22:04.982332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.081 [2024-12-06 19:22:04.982353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.081 [2024-12-06 19:22:04.982469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.081 [2024-12-06 19:22:04.982483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.081 [2024-12-06 19:22:04.982489] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.081 [2024-12-06 19:22:04.982495] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=4096, cccid=4 00:23:20.082 
[2024-12-06 19:22:04.982502] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab700) on tqpair(0xb49690): expected_datao=0, payload_size=4096 00:23:20.082 [2024-12-06 19:22:04.982509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:04.982525] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:04.982533] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.026737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.026755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 [2024-12-06 19:22:05.026762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.026769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.026783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to set host ID (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026860] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:20.082 [2024-12-06 19:22:05.026868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:20.082 [2024-12-06 19:22:05.026876] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:20.082 [2024-12-06 19:22:05.026894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.026902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.026913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.026924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.026931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.026936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.026945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.082 [2024-12-06 19:22:05.026971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.082 [2024-12-06 19:22:05.026986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab880, cid 5, qid 0 00:23:20.082 [2024-12-06 19:22:05.027086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.027101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 
[2024-12-06 19:22:05.027107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.027123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.027132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 [2024-12-06 19:22:05.027138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab880) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.027159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab880, cid 5, qid 0 00:23:20.082 [2024-12-06 19:22:05.027277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.027288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 [2024-12-06 19:22:05.027295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab880) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.027315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027332] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab880, cid 5, qid 0 00:23:20.082 [2024-12-06 19:22:05.027428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.027439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 [2024-12-06 19:22:05.027445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab880) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.027472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab880, cid 5, qid 0 00:23:20.082 [2024-12-06 19:22:05.027586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.082 [2024-12-06 19:22:05.027599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.082 [2024-12-06 19:22:05.027605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab880) on tqpair=0xb49690 00:23:20.082 [2024-12-06 19:22:05.027634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.027759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb49690) 00:23:20.082 [2024-12-06 19:22:05.027768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.082 [2024-12-06 19:22:05.027799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab880, cid 5, qid 0 00:23:20.082 [2024-12-06 19:22:05.027810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab700, cid 4, qid 0 00:23:20.082 [2024-12-06 19:22:05.027817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbaba00, cid 6, 
qid 0 00:23:20.082 [2024-12-06 19:22:05.027825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabb80, cid 7, qid 0 00:23:20.082 [2024-12-06 19:22:05.028098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.082 [2024-12-06 19:22:05.028113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.082 [2024-12-06 19:22:05.028119] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028125] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=8192, cccid=5 00:23:20.082 [2024-12-06 19:22:05.028132] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab880) on tqpair(0xb49690): expected_datao=0, payload_size=8192 00:23:20.082 [2024-12-06 19:22:05.028139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028170] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.082 [2024-12-06 19:22:05.028187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.082 [2024-12-06 19:22:05.028193] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028198] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=512, cccid=4 00:23:20.082 [2024-12-06 19:22:05.028205] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab700) on tqpair(0xb49690): expected_datao=0, payload_size=512 00:23:20.082 [2024-12-06 19:22:05.028211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028220] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028226] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.082 [2024-12-06 19:22:05.028242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.082 [2024-12-06 19:22:05.028248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=512, cccid=6 00:23:20.082 [2024-12-06 19:22:05.028260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbaba00) on tqpair(0xb49690): expected_datao=0, payload_size=512 00:23:20.082 [2024-12-06 19:22:05.028267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028278] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028285] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.082 [2024-12-06 19:22:05.028293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.082 [2024-12-06 19:22:05.028301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.082 [2024-12-06 19:22:05.028307] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028313] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb49690): datao=0, datal=4096, cccid=7 00:23:20.083 [2024-12-06 19:22:05.028320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbabb80) on tqpair(0xb49690): expected_datao=0, payload_size=4096 00:23:20.083 [2024-12-06 19:22:05.028326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028335] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028341] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:23:20.083 [2024-12-06 19:22:05.028352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.083 [2024-12-06 19:22:05.028361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.083 [2024-12-06 19:22:05.028367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab880) on tqpair=0xb49690 00:23:20.083 [2024-12-06 19:22:05.028391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.083 [2024-12-06 19:22:05.028401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.083 [2024-12-06 19:22:05.028407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab700) on tqpair=0xb49690 00:23:20.083 [2024-12-06 19:22:05.028428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.083 [2024-12-06 19:22:05.028438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.083 [2024-12-06 19:22:05.028444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbaba00) on tqpair=0xb49690 00:23:20.083 [2024-12-06 19:22:05.028460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.083 [2024-12-06 19:22:05.028469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.083 [2024-12-06 19:22:05.028475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.083 [2024-12-06 19:22:05.028481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbabb80) on tqpair=0xb49690 00:23:20.083 ===================================================== 00:23:20.083 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.083 
===================================================== 00:23:20.083 Controller Capabilities/Features 00:23:20.083 ================================ 00:23:20.083 Vendor ID: 8086 00:23:20.083 Subsystem Vendor ID: 8086 00:23:20.083 Serial Number: SPDK00000000000001 00:23:20.083 Model Number: SPDK bdev Controller 00:23:20.083 Firmware Version: 25.01 00:23:20.083 Recommended Arb Burst: 6 00:23:20.083 IEEE OUI Identifier: e4 d2 5c 00:23:20.083 Multi-path I/O 00:23:20.083 May have multiple subsystem ports: Yes 00:23:20.083 May have multiple controllers: Yes 00:23:20.083 Associated with SR-IOV VF: No 00:23:20.083 Max Data Transfer Size: 131072 00:23:20.083 Max Number of Namespaces: 32 00:23:20.083 Max Number of I/O Queues: 127 00:23:20.083 NVMe Specification Version (VS): 1.3 00:23:20.083 NVMe Specification Version (Identify): 1.3 00:23:20.083 Maximum Queue Entries: 128 00:23:20.083 Contiguous Queues Required: Yes 00:23:20.083 Arbitration Mechanisms Supported 00:23:20.083 Weighted Round Robin: Not Supported 00:23:20.083 Vendor Specific: Not Supported 00:23:20.083 Reset Timeout: 15000 ms 00:23:20.083 Doorbell Stride: 4 bytes 00:23:20.083 NVM Subsystem Reset: Not Supported 00:23:20.083 Command Sets Supported 00:23:20.083 NVM Command Set: Supported 00:23:20.083 Boot Partition: Not Supported 00:23:20.083 Memory Page Size Minimum: 4096 bytes 00:23:20.083 Memory Page Size Maximum: 4096 bytes 00:23:20.083 Persistent Memory Region: Not Supported 00:23:20.083 Optional Asynchronous Events Supported 00:23:20.083 Namespace Attribute Notices: Supported 00:23:20.083 Firmware Activation Notices: Not Supported 00:23:20.083 ANA Change Notices: Not Supported 00:23:20.083 PLE Aggregate Log Change Notices: Not Supported 00:23:20.083 LBA Status Info Alert Notices: Not Supported 00:23:20.083 EGE Aggregate Log Change Notices: Not Supported 00:23:20.083 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.083 Zone Descriptor Change Notices: Not Supported 00:23:20.083 Discovery Log Change 
Notices: Not Supported 00:23:20.083 Controller Attributes 00:23:20.083 128-bit Host Identifier: Supported 00:23:20.083 Non-Operational Permissive Mode: Not Supported 00:23:20.083 NVM Sets: Not Supported 00:23:20.083 Read Recovery Levels: Not Supported 00:23:20.083 Endurance Groups: Not Supported 00:23:20.083 Predictable Latency Mode: Not Supported 00:23:20.083 Traffic Based Keep ALive: Not Supported 00:23:20.083 Namespace Granularity: Not Supported 00:23:20.083 SQ Associations: Not Supported 00:23:20.083 UUID List: Not Supported 00:23:20.083 Multi-Domain Subsystem: Not Supported 00:23:20.083 Fixed Capacity Management: Not Supported 00:23:20.083 Variable Capacity Management: Not Supported 00:23:20.083 Delete Endurance Group: Not Supported 00:23:20.083 Delete NVM Set: Not Supported 00:23:20.083 Extended LBA Formats Supported: Not Supported 00:23:20.083 Flexible Data Placement Supported: Not Supported 00:23:20.083 00:23:20.083 Controller Memory Buffer Support 00:23:20.083 ================================ 00:23:20.083 Supported: No 00:23:20.083 00:23:20.083 Persistent Memory Region Support 00:23:20.083 ================================ 00:23:20.083 Supported: No 00:23:20.083 00:23:20.083 Admin Command Set Attributes 00:23:20.083 ============================ 00:23:20.083 Security Send/Receive: Not Supported 00:23:20.083 Format NVM: Not Supported 00:23:20.083 Firmware Activate/Download: Not Supported 00:23:20.083 Namespace Management: Not Supported 00:23:20.083 Device Self-Test: Not Supported 00:23:20.083 Directives: Not Supported 00:23:20.083 NVMe-MI: Not Supported 00:23:20.083 Virtualization Management: Not Supported 00:23:20.083 Doorbell Buffer Config: Not Supported 00:23:20.083 Get LBA Status Capability: Not Supported 00:23:20.083 Command & Feature Lockdown Capability: Not Supported 00:23:20.083 Abort Command Limit: 4 00:23:20.083 Async Event Request Limit: 4 00:23:20.083 Number of Firmware Slots: N/A 00:23:20.083 Firmware Slot 1 Read-Only: N/A 00:23:20.083 Firmware 
Activation Without Reset: N/A 00:23:20.083 Multiple Update Detection Support: N/A 00:23:20.083 Firmware Update Granularity: No Information Provided 00:23:20.083 Per-Namespace SMART Log: No 00:23:20.083 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.083 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:20.083 Command Effects Log Page: Supported 00:23:20.083 Get Log Page Extended Data: Supported 00:23:20.083 Telemetry Log Pages: Not Supported 00:23:20.083 Persistent Event Log Pages: Not Supported 00:23:20.083 Supported Log Pages Log Page: May Support 00:23:20.083 Commands Supported & Effects Log Page: Not Supported 00:23:20.083 Feature Identifiers & Effects Log Page:May Support 00:23:20.083 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.083 Data Area 4 for Telemetry Log: Not Supported 00:23:20.083 Error Log Page Entries Supported: 128 00:23:20.083 Keep Alive: Supported 00:23:20.083 Keep Alive Granularity: 10000 ms 00:23:20.083 00:23:20.083 NVM Command Set Attributes 00:23:20.083 ========================== 00:23:20.083 Submission Queue Entry Size 00:23:20.083 Max: 64 00:23:20.083 Min: 64 00:23:20.083 Completion Queue Entry Size 00:23:20.083 Max: 16 00:23:20.083 Min: 16 00:23:20.083 Number of Namespaces: 32 00:23:20.083 Compare Command: Supported 00:23:20.083 Write Uncorrectable Command: Not Supported 00:23:20.083 Dataset Management Command: Supported 00:23:20.083 Write Zeroes Command: Supported 00:23:20.083 Set Features Save Field: Not Supported 00:23:20.083 Reservations: Supported 00:23:20.083 Timestamp: Not Supported 00:23:20.083 Copy: Supported 00:23:20.083 Volatile Write Cache: Present 00:23:20.083 Atomic Write Unit (Normal): 1 00:23:20.083 Atomic Write Unit (PFail): 1 00:23:20.083 Atomic Compare & Write Unit: 1 00:23:20.083 Fused Compare & Write: Supported 00:23:20.083 Scatter-Gather List 00:23:20.083 SGL Command Set: Supported 00:23:20.083 SGL Keyed: Supported 00:23:20.083 SGL Bit Bucket Descriptor: Not Supported 00:23:20.083 SGL Metadata 
Pointer: Not Supported 00:23:20.083 Oversized SGL: Not Supported 00:23:20.083 SGL Metadata Address: Not Supported 00:23:20.083 SGL Offset: Supported 00:23:20.083 Transport SGL Data Block: Not Supported 00:23:20.083 Replay Protected Memory Block: Not Supported 00:23:20.083 00:23:20.083 Firmware Slot Information 00:23:20.083 ========================= 00:23:20.083 Active slot: 1 00:23:20.083 Slot 1 Firmware Revision: 25.01 00:23:20.083 00:23:20.083 00:23:20.083 Commands Supported and Effects 00:23:20.083 ============================== 00:23:20.083 Admin Commands 00:23:20.083 -------------- 00:23:20.083 Get Log Page (02h): Supported 00:23:20.083 Identify (06h): Supported 00:23:20.083 Abort (08h): Supported 00:23:20.083 Set Features (09h): Supported 00:23:20.083 Get Features (0Ah): Supported 00:23:20.083 Asynchronous Event Request (0Ch): Supported 00:23:20.083 Keep Alive (18h): Supported 00:23:20.083 I/O Commands 00:23:20.083 ------------ 00:23:20.084 Flush (00h): Supported LBA-Change 00:23:20.084 Write (01h): Supported LBA-Change 00:23:20.084 Read (02h): Supported 00:23:20.084 Compare (05h): Supported 00:23:20.084 Write Zeroes (08h): Supported LBA-Change 00:23:20.084 Dataset Management (09h): Supported LBA-Change 00:23:20.084 Copy (19h): Supported LBA-Change 00:23:20.084 00:23:20.084 Error Log 00:23:20.084 ========= 00:23:20.084 00:23:20.084 Arbitration 00:23:20.084 =========== 00:23:20.084 Arbitration Burst: 1 00:23:20.084 00:23:20.084 Power Management 00:23:20.084 ================ 00:23:20.084 Number of Power States: 1 00:23:20.084 Current Power State: Power State #0 00:23:20.084 Power State #0: 00:23:20.084 Max Power: 0.00 W 00:23:20.084 Non-Operational State: Operational 00:23:20.084 Entry Latency: Not Reported 00:23:20.084 Exit Latency: Not Reported 00:23:20.084 Relative Read Throughput: 0 00:23:20.084 Relative Read Latency: 0 00:23:20.084 Relative Write Throughput: 0 00:23:20.084 Relative Write Latency: 0 00:23:20.084 Idle Power: Not Reported 00:23:20.084 Active 
Power: Not Reported 00:23:20.084 Non-Operational Permissive Mode: Not Supported 00:23:20.084 00:23:20.084 Health Information 00:23:20.084 ================== 00:23:20.084 Critical Warnings: 00:23:20.084 Available Spare Space: OK 00:23:20.084 Temperature: OK 00:23:20.084 Device Reliability: OK 00:23:20.084 Read Only: No 00:23:20.084 Volatile Memory Backup: OK 00:23:20.084 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:20.084 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:20.084 Available Spare: 0% 00:23:20.084 Available Spare Threshold: 0% 00:23:20.084 Life Percentage Used:[2024-12-06 19:22:05.028596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.028607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.028617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.028638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabb80, cid 7, qid 0 00:23:20.084 [2024-12-06 19:22:05.028854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.028868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.028875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.028881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbabb80) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.028927] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:20.084 [2024-12-06 19:22:05.028946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab100) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.028956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.084 [2024-12-06 19:22:05.028964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab280) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.028977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.084 [2024-12-06 19:22:05.028986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab400) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.028993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.084 [2024-12-06 19:22:05.029000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.084 [2024-12-06 19:22:05.029038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.029062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.029095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.084 [2024-12-06 19:22:05.029261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.029273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.029279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029285] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.029318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.029342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.084 [2024-12-06 19:22:05.029427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.029439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.029445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029458] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:20.084 [2024-12-06 19:22:05.029465] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:20.084 [2024-12-06 19:22:05.029480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.029503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.029522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.084 [2024-12-06 19:22:05.029600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.029613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.029620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.029669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.029689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.084 [2024-12-06 19:22:05.029790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.029805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.029812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029850] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.029859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.029880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.084 [2024-12-06 19:22:05.029952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.084 [2024-12-06 19:22:05.029965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.084 [2024-12-06 19:22:05.029972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.029978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.084 [2024-12-06 19:22:05.029994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.030002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.084 [2024-12-06 19:22:05.030023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.084 [2024-12-06 19:22:05.030033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.084 [2024-12-06 19:22:05.030053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 19:22:05.030125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.030137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.030143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.030164] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.085 [2024-12-06 19:22:05.030188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.085 [2024-12-06 19:22:05.030207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 19:22:05.030279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.030290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.030296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.030317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.085 [2024-12-06 19:22:05.030344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.085 [2024-12-06 19:22:05.030364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 19:22:05.030437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.030450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.030456] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.030477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.085 [2024-12-06 19:22:05.030501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.085 [2024-12-06 19:22:05.030520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 19:22:05.030589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.030600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.030607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.030627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.030642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.085 [2024-12-06 19:22:05.030651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.085 [2024-12-06 19:22:05.030671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 
19:22:05.034732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.034749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.034756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.034763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.034780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.034789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.034795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb49690) 00:23:20.085 [2024-12-06 19:22:05.034805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.085 [2024-12-06 19:22:05.034827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab580, cid 3, qid 0 00:23:20.085 [2024-12-06 19:22:05.034979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.085 [2024-12-06 19:22:05.034991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.085 [2024-12-06 19:22:05.034997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.085 [2024-12-06 19:22:05.035019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbab580) on tqpair=0xb49690 00:23:20.085 [2024-12-06 19:22:05.035031] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:23:20.085 0% 00:23:20.085 Data Units Read: 0 00:23:20.085 Data Units Written: 0 00:23:20.085 Host Read Commands: 0 00:23:20.085 Host Write Commands: 0 00:23:20.085 Controller Busy Time: 0 minutes 00:23:20.085 Power Cycles: 0 00:23:20.085 Power On Hours: 0 hours 00:23:20.085 Unsafe 
Shutdowns: 0 00:23:20.085 Unrecoverable Media Errors: 0 00:23:20.085 Lifetime Error Log Entries: 0 00:23:20.085 Warning Temperature Time: 0 minutes 00:23:20.085 Critical Temperature Time: 0 minutes 00:23:20.085 00:23:20.085 Number of Queues 00:23:20.085 ================ 00:23:20.085 Number of I/O Submission Queues: 127 00:23:20.085 Number of I/O Completion Queues: 127 00:23:20.085 00:23:20.085 Active Namespaces 00:23:20.085 ================= 00:23:20.085 Namespace ID:1 00:23:20.085 Error Recovery Timeout: Unlimited 00:23:20.085 Command Set Identifier: NVM (00h) 00:23:20.085 Deallocate: Supported 00:23:20.085 Deallocated/Unwritten Error: Not Supported 00:23:20.085 Deallocated Read Value: Unknown 00:23:20.085 Deallocate in Write Zeroes: Not Supported 00:23:20.085 Deallocated Guard Field: 0xFFFF 00:23:20.085 Flush: Supported 00:23:20.085 Reservation: Supported 00:23:20.085 Namespace Sharing Capabilities: Multiple Controllers 00:23:20.085 Size (in LBAs): 131072 (0GiB) 00:23:20.085 Capacity (in LBAs): 131072 (0GiB) 00:23:20.085 Utilization (in LBAs): 131072 (0GiB) 00:23:20.085 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:20.085 EUI64: ABCDEF0123456789 00:23:20.085 UUID: c6a27bb6-14cc-484a-b1df-5b3e365ac305 00:23:20.085 Thin Provisioning: Not Supported 00:23:20.085 Per-NS Atomic Units: Yes 00:23:20.085 Atomic Boundary Size (Normal): 0 00:23:20.085 Atomic Boundary Size (PFail): 0 00:23:20.085 Atomic Boundary Offset: 0 00:23:20.085 Maximum Single Source Range Length: 65535 00:23:20.085 Maximum Copy Length: 65535 00:23:20.085 Maximum Source Range Count: 1 00:23:20.085 NGUID/EUI64 Never Reused: No 00:23:20.085 Namespace Write Protected: No 00:23:20.085 Number of LBA Formats: 1 00:23:20.085 Current LBA Format: LBA Format #00 00:23:20.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:20.085 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.085 rmmod nvme_tcp 00:23:20.085 rmmod nvme_fabrics 00:23:20.085 rmmod nvme_keyring 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 276954 ']' 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 276954 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 276954 ']' 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 276954 00:23:20.085 19:22:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.085 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276954 00:23:20.342 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.342 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.342 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276954' 00:23:20.342 killing process with pid 276954 00:23:20.342 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 276954 00:23:20.342 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 276954 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.601 19:22:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.601 19:22:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.506 00:23:22.506 real 0m5.663s 00:23:22.506 user 0m4.806s 00:23:22.506 sys 0m2.017s 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:22.506 ************************************ 00:23:22.506 END TEST nvmf_identify 00:23:22.506 ************************************ 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.506 ************************************ 00:23:22.506 START TEST nvmf_perf 00:23:22.506 ************************************ 00:23:22.506 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:22.765 * Looking for test storage... 
00:23:22.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:22.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.765 --rc genhtml_branch_coverage=1 00:23:22.765 --rc genhtml_function_coverage=1 00:23:22.765 --rc genhtml_legend=1 00:23:22.765 --rc geninfo_all_blocks=1 00:23:22.765 --rc geninfo_unexecuted_blocks=1 00:23:22.765 00:23:22.765 ' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:22.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:22.765 --rc genhtml_branch_coverage=1 00:23:22.765 --rc genhtml_function_coverage=1 00:23:22.765 --rc genhtml_legend=1 00:23:22.765 --rc geninfo_all_blocks=1 00:23:22.765 --rc geninfo_unexecuted_blocks=1 00:23:22.765 00:23:22.765 ' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:22.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.765 --rc genhtml_branch_coverage=1 00:23:22.765 --rc genhtml_function_coverage=1 00:23:22.765 --rc genhtml_legend=1 00:23:22.765 --rc geninfo_all_blocks=1 00:23:22.765 --rc geninfo_unexecuted_blocks=1 00:23:22.765 00:23:22.765 ' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:22.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.765 --rc genhtml_branch_coverage=1 00:23:22.765 --rc genhtml_function_coverage=1 00:23:22.765 --rc genhtml_legend=1 00:23:22.765 --rc geninfo_all_blocks=1 00:23:22.765 --rc geninfo_unexecuted_blocks=1 00:23:22.765 00:23:22.765 ' 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.765 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:22.766 19:22:07 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.766 19:22:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.293 19:22:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.293 
19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:25.293 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:25.293 Found 0000:84:00.1 (0x8086 - 
0x159b) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:25.293 Found net devices under 0000:84:00.0: cvl_0_0 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.293 19:22:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:25.293 Found net devices under 0000:84:00.1: cvl_0_1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.293 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:25.294 00:23:25.294 --- 10.0.0.2 ping statistics --- 00:23:25.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.294 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:23:25.294 00:23:25.294 --- 10.0.0.1 ping statistics --- 00:23:25.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.294 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=279063 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 279063 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 279063 ']' 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.294 19:22:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.294 [2024-12-06 19:22:10.024207] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:23:25.294 [2024-12-06 19:22:10.024314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.294 [2024-12-06 19:22:10.103274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.294 [2024-12-06 19:22:10.164681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.294 [2024-12-06 19:22:10.164770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.294 [2024-12-06 19:22:10.164792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.294 [2024-12-06 19:22:10.164812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.294 [2024-12-06 19:22:10.164827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.294 [2024-12-06 19:22:10.166558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.294 [2024-12-06 19:22:10.166625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.294 [2024-12-06 19:22:10.166755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.294 [2024-12-06 19:22:10.166761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:25.294 19:22:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:28.591 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:28.591 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:28.849 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:23:28.849 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:29.107 19:22:13 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:29.107 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:23:29.107 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:29.107 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:29.107 19:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.366 [2024-12-06 19:22:14.234774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.366 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.624 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:29.624 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.882 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:29.882 19:22:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:30.140 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.400 [2024-12-06 19:22:15.310668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.400 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:30.657 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:23:30.657 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:23:30.657 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:30.657 19:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:23:32.036 Initializing NVMe Controllers 00:23:32.036 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:23:32.036 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:23:32.036 Initialization complete. Launching workers. 00:23:32.036 ======================================================== 00:23:32.036 Latency(us) 00:23:32.036 Device Information : IOPS MiB/s Average min max 00:23:32.036 PCIE (0000:82:00.0) NSID 1 from core 0: 85297.82 333.19 374.62 37.96 6851.69 00:23:32.036 ======================================================== 00:23:32.036 Total : 85297.82 333.19 374.62 37.96 6851.69 00:23:32.036 00:23:32.036 19:22:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:33.412 Initializing NVMe Controllers 00:23:33.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:33.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:33.412 Initialization complete. Launching workers. 
00:23:33.412 ======================================================== 00:23:33.412 Latency(us) 00:23:33.412 Device Information : IOPS MiB/s Average min max 00:23:33.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.78 0.34 11813.82 138.31 45807.44 00:23:33.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.85 0.24 17075.21 6982.54 47887.09 00:23:33.412 ======================================================== 00:23:33.412 Total : 147.62 0.58 13982.36 138.31 47887.09 00:23:33.412 00:23:33.412 19:22:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.348 Initializing NVMe Controllers 00:23:34.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:34.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:34.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:34.348 Initialization complete. Launching workers. 
00:23:34.348 ======================================================== 00:23:34.348 Latency(us) 00:23:34.348 Device Information : IOPS MiB/s Average min max 00:23:34.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8595.99 33.58 3739.09 647.77 7720.32 00:23:34.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3914.54 15.29 8208.39 6032.75 15787.03 00:23:34.348 ======================================================== 00:23:34.348 Total : 12510.52 48.87 5137.53 647.77 15787.03 00:23:34.348 00:23:34.348 19:22:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:34.348 19:22:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:34.348 19:22:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:37.637 Initializing NVMe Controllers 00:23:37.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.637 Controller IO queue size 128, less than required. 00:23:37.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:37.637 Controller IO queue size 128, less than required. 00:23:37.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:37.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:37.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:37.637 Initialization complete. Launching workers. 
00:23:37.637 ======================================================== 00:23:37.637 Latency(us) 00:23:37.637 Device Information : IOPS MiB/s Average min max 00:23:37.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1306.40 326.60 100683.07 60376.40 164743.47 00:23:37.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.46 147.11 225657.99 112760.21 334576.35 00:23:37.637 ======================================================== 00:23:37.637 Total : 1894.86 473.71 139494.55 60376.40 334576.35 00:23:37.637 00:23:37.637 19:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:37.637 No valid NVMe controllers or AIO or URING devices found 00:23:37.637 Initializing NVMe Controllers 00:23:37.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.637 Controller IO queue size 128, less than required. 00:23:37.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:37.637 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:37.637 Controller IO queue size 128, less than required. 00:23:37.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:37.637 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:37.637 WARNING: Some requested NVMe devices were skipped 00:23:37.637 19:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:40.170 Initializing NVMe Controllers 00:23:40.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.170 Controller IO queue size 128, less than required. 00:23:40.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:40.170 Controller IO queue size 128, less than required. 00:23:40.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:40.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:40.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:40.170 Initialization complete. Launching workers. 
00:23:40.170 00:23:40.170 ==================== 00:23:40.170 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:40.170 TCP transport: 00:23:40.170 polls: 7883 00:23:40.170 idle_polls: 5465 00:23:40.170 sock_completions: 2418 00:23:40.170 nvme_completions: 4819 00:23:40.170 submitted_requests: 7244 00:23:40.170 queued_requests: 1 00:23:40.170 00:23:40.170 ==================== 00:23:40.170 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:40.170 TCP transport: 00:23:40.170 polls: 8054 00:23:40.170 idle_polls: 5570 00:23:40.170 sock_completions: 2484 00:23:40.170 nvme_completions: 4953 00:23:40.170 submitted_requests: 7452 00:23:40.170 queued_requests: 1 00:23:40.170 ======================================================== 00:23:40.170 Latency(us) 00:23:40.170 Device Information : IOPS MiB/s Average min max 00:23:40.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1203.04 300.76 110695.05 61328.44 191767.12 00:23:40.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1236.50 309.13 103836.61 55352.85 149906.01 00:23:40.171 ======================================================== 00:23:40.171 Total : 2439.54 609.89 107218.79 55352.85 191767.12 00:23:40.171 00:23:40.171 19:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:40.171 19:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.171 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.431 rmmod nvme_tcp 00:23:40.431 rmmod nvme_fabrics 00:23:40.431 rmmod nvme_keyring 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 279063 ']' 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 279063 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 279063 ']' 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 279063 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279063 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279063' 00:23:40.431 killing process with pid 279063 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 279063 00:23:40.431 19:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 279063 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.332 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.333 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.333 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.333 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.333 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.333 19:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.243 00:23:44.243 real 0m21.451s 00:23:44.243 user 1m5.786s 00:23:44.243 sys 0m6.010s 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.243 ************************************ 00:23:44.243 END TEST nvmf_perf 00:23:44.243 ************************************ 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.243 19:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.243 ************************************ 00:23:44.244 START TEST nvmf_fio_host 00:23:44.244 ************************************ 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:44.244 * Looking for test storage... 00:23:44.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.244 19:22:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.244 19:22:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.244 --rc genhtml_branch_coverage=1 00:23:44.244 --rc genhtml_function_coverage=1 00:23:44.244 --rc genhtml_legend=1 00:23:44.244 --rc geninfo_all_blocks=1 00:23:44.244 --rc geninfo_unexecuted_blocks=1 00:23:44.244 00:23:44.244 ' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.244 --rc genhtml_branch_coverage=1 00:23:44.244 --rc genhtml_function_coverage=1 00:23:44.244 --rc genhtml_legend=1 00:23:44.244 --rc geninfo_all_blocks=1 00:23:44.244 --rc geninfo_unexecuted_blocks=1 00:23:44.244 00:23:44.244 ' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.244 --rc genhtml_branch_coverage=1 00:23:44.244 --rc genhtml_function_coverage=1 00:23:44.244 --rc genhtml_legend=1 00:23:44.244 --rc geninfo_all_blocks=1 00:23:44.244 --rc geninfo_unexecuted_blocks=1 00:23:44.244 00:23:44.244 ' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.244 --rc genhtml_branch_coverage=1 00:23:44.244 --rc genhtml_function_coverage=1 00:23:44.244 --rc genhtml_legend=1 00:23:44.244 --rc geninfo_all_blocks=1 00:23:44.244 --rc geninfo_unexecuted_blocks=1 00:23:44.244 00:23:44.244 ' 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.244 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.245 19:22:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.245 19:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:84:00.0 (0x8086 - 0x159b)' 00:23:46.219 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:46.219 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.219 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.220 19:22:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:46.220 Found net devices under 0000:84:00.0: cvl_0_0 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:46.220 Found net devices under 0000:84:00.1: cvl_0_1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.220 19:22:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.220 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:23:46.502 00:23:46.502 --- 10.0.0.2 ping statistics --- 00:23:46.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.502 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:46.502 00:23:46.502 --- 10.0.0.1 ping statistics --- 00:23:46.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.502 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=282929 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 282929 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 282929 ']' 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.502 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.502 [2024-12-06 19:22:31.389950] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:46.502 [2024-12-06 19:22:31.390033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.502 [2024-12-06 19:22:31.462096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.502 [2024-12-06 19:22:31.519491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.502 [2024-12-06 19:22:31.519548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.502 [2024-12-06 19:22:31.519578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.502 [2024-12-06 19:22:31.519589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.502 [2024-12-06 19:22:31.519599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.502 [2024-12-06 19:22:31.521235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.502 [2024-12-06 19:22:31.521293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.502 [2024-12-06 19:22:31.521359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.502 [2024-12-06 19:22:31.521362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.795 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.795 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:46.795 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:47.099 [2024-12-06 19:22:31.908085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.099 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:47.099 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.099 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.099 19:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:47.400 Malloc1 00:23:47.400 19:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.690 19:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:47.949 19:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.207 [2024-12-06 19:22:33.184335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.207 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:48.465 19:22:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.465 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:48.466 19:22:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:48.724 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:48.724 fio-3.35 00:23:48.724 Starting 1 thread 00:23:51.263 00:23:51.263 test: (groupid=0, jobs=1): err= 0: pid=283414: Fri Dec 6 19:22:36 2024 00:23:51.263 read: IOPS=8829, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2007msec) 00:23:51.263 slat (usec): min=2, max=160, avg= 2.80, stdev= 2.20 00:23:51.263 clat (usec): min=2551, max=13409, avg=7898.54, stdev=648.51 00:23:51.263 lat (usec): min=2577, max=13412, avg=7901.33, stdev=648.39 00:23:51.263 clat percentiles (usec): 00:23:51.263 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:23:51.263 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:23:51.263 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:23:51.263 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11469], 99.95th=[12911], 00:23:51.263 | 99.99th=[13304] 00:23:51.263 bw ( KiB/s): min=34280, max=36048, per=99.98%, avg=35310.00, stdev=741.93, samples=4 00:23:51.263 iops : min= 8570, max= 9012, avg=8827.50, stdev=185.48, samples=4 00:23:51.263 write: IOPS=8845, BW=34.6MiB/s (36.2MB/s)(69.3MiB/2007msec); 0 zone resets 00:23:51.263 slat (usec): min=2, max=142, avg= 3.00, stdev= 1.93 00:23:51.263 clat (usec): min=1398, max=12665, avg=6529.42, stdev=545.44 00:23:51.263 lat (usec): min=1406, max=12668, avg=6532.43, stdev=545.35 00:23:51.263 clat percentiles (usec): 00:23:51.263 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:23:51.263 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:23:51.263 | 70.00th=[ 
6783], 80.00th=[ 6915], 90.00th=[ 7177], 95.00th=[ 7308], 00:23:51.263 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[10683], 99.95th=[11600], 00:23:51.263 | 99.99th=[12649] 00:23:51.263 bw ( KiB/s): min=35152, max=35592, per=100.00%, avg=35380.00, stdev=206.40, samples=4 00:23:51.263 iops : min= 8788, max= 8898, avg=8845.00, stdev=51.60, samples=4 00:23:51.263 lat (msec) : 2=0.03%, 4=0.10%, 10=99.74%, 20=0.14% 00:23:51.263 cpu : usr=70.14%, sys=28.61%, ctx=66, majf=0, minf=30 00:23:51.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:51.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:51.263 issued rwts: total=17720,17752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:51.263 00:23:51.263 Run status group 0 (all jobs): 00:23:51.263 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:23:51.263 WRITE: bw=34.6MiB/s (36.2MB/s), 34.6MiB/s-34.6MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.7MB), run=2007-2007msec 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:51.263 
19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:51.263 19:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:51.523 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:51.523 fio-3.35 00:23:51.523 Starting 1 thread 00:23:54.053 00:23:54.053 test: (groupid=0, jobs=1): err= 0: pid=283753: Fri Dec 6 19:22:38 2024 00:23:54.053 read: IOPS=8140, BW=127MiB/s (133MB/s)(255MiB/2007msec) 00:23:54.053 slat (usec): min=2, max=128, avg= 4.43, stdev= 2.75 00:23:54.053 clat (usec): min=2095, max=17730, avg=9260.62, stdev=2163.48 00:23:54.053 lat (usec): min=2098, max=17735, avg=9265.04, stdev=2163.50 00:23:54.053 clat percentiles (usec): 00:23:54.053 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7373], 00:23:54.053 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:23:54.053 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11994], 95.00th=[13173], 00:23:54.053 | 99.00th=[15270], 99.50th=[15795], 99.90th=[16909], 99.95th=[17433], 00:23:54.053 | 99.99th=[17695] 00:23:54.053 bw ( KiB/s): min=58368, max=75808, per=50.06%, avg=65200.00, stdev=7449.45, samples=4 00:23:54.053 iops : min= 3648, max= 4738, avg=4075.00, stdev=465.59, samples=4 00:23:54.053 write: IOPS=4808, BW=75.1MiB/s (78.8MB/s)(134MiB/1788msec); 0 zone resets 00:23:54.053 slat (usec): min=30, max=175, avg=38.81, stdev= 8.12 00:23:54.053 clat (usec): min=6045, max=19488, avg=11747.85, stdev=1908.29 00:23:54.053 lat (usec): min=6080, max=19534, avg=11786.66, stdev=1908.59 00:23:54.053 clat percentiles (usec): 00:23:54.053 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 
00:23:54.053 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:23:54.053 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14353], 95.00th=[15270], 00:23:54.053 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:23:54.053 | 99.99th=[19530] 00:23:54.053 bw ( KiB/s): min=60832, max=78848, per=88.68%, avg=68224.00, stdev=7590.55, samples=4 00:23:54.053 iops : min= 3802, max= 4928, avg=4264.00, stdev=474.41, samples=4 00:23:54.053 lat (msec) : 4=0.20%, 10=48.33%, 20=51.48% 00:23:54.053 cpu : usr=79.86%, sys=17.85%, ctx=77, majf=0, minf=64 00:23:54.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:54.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:54.053 issued rwts: total=16337,8597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:54.053 00:23:54.053 Run status group 0 (all jobs): 00:23:54.053 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2007-2007msec 00:23:54.053 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=134MiB (141MB), run=1788-1788msec 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.053 19:22:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.053 rmmod nvme_tcp 00:23:54.053 rmmod nvme_fabrics 00:23:54.053 rmmod nvme_keyring 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.053 19:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 282929 ']' 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 282929 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 282929 ']' 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 282929 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282929 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282929' 00:23:54.053 killing process 
with pid 282929 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 282929 00:23:54.053 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 282929 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.314 19:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.852 00:23:56.852 real 0m12.331s 00:23:56.852 user 0m37.124s 00:23:56.852 sys 0m3.834s 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.852 ************************************ 00:23:56.852 END TEST nvmf_fio_host 
00:23:56.852 ************************************ 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.852 ************************************ 00:23:56.852 START TEST nvmf_failover 00:23:56.852 ************************************ 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:56.852 * Looking for test storage... 00:23:56.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.852 --rc genhtml_branch_coverage=1 00:23:56.852 --rc genhtml_function_coverage=1 00:23:56.852 --rc genhtml_legend=1 00:23:56.852 --rc geninfo_all_blocks=1 00:23:56.852 --rc geninfo_unexecuted_blocks=1 00:23:56.852 00:23:56.852 ' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.852 --rc genhtml_branch_coverage=1 00:23:56.852 --rc genhtml_function_coverage=1 00:23:56.852 --rc genhtml_legend=1 00:23:56.852 --rc geninfo_all_blocks=1 00:23:56.852 --rc geninfo_unexecuted_blocks=1 00:23:56.852 00:23:56.852 ' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.852 --rc genhtml_branch_coverage=1 00:23:56.852 --rc genhtml_function_coverage=1 00:23:56.852 --rc genhtml_legend=1 00:23:56.852 --rc geninfo_all_blocks=1 00:23:56.852 --rc geninfo_unexecuted_blocks=1 00:23:56.852 00:23:56.852 ' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.852 --rc genhtml_branch_coverage=1 00:23:56.852 --rc genhtml_function_coverage=1 00:23:56.852 --rc genhtml_legend=1 00:23:56.852 --rc geninfo_all_blocks=1 
00:23:56.852 --rc geninfo_unexecuted_blocks=1 00:23:56.852 00:23:56.852 ' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.852 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.853 19:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.757 19:22:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:58.757 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:58.757 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:58.757 Found net devices under 0000:84:00.0: cvl_0_0 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:58.757 Found net devices under 0000:84:00.1: cvl_0_1 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:58.757 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:58.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:23:58.758 00:23:58.758 --- 10.0.0.2 ping statistics --- 00:23:58.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.758 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:23:58.758 00:23:58.758 --- 10.0.0.1 ping statistics --- 00:23:58.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.758 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:58.758 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=286048 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 286048 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 286048 ']' 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.016 19:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.016 [2024-12-06 19:22:43.874837] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:23:59.016 [2024-12-06 19:22:43.874934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.016 [2024-12-06 19:22:43.946894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.016 [2024-12-06 19:22:44.003258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.016 [2024-12-06 19:22:44.003314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.016 [2024-12-06 19:22:44.003334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.016 [2024-12-06 19:22:44.003352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:59.016 [2024-12-06 19:22:44.003366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.016 [2024-12-06 19:22:44.004981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.016 [2024-12-06 19:22:44.005025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.016 [2024-12-06 19:22:44.005043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.274 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.532 [2024-12-06 19:22:44.390510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.532 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:59.788 Malloc0 00:23:59.788 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.046 19:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.303 19:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.560 [2024-12-06 19:22:45.516093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.560 19:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:00.817 [2024-12-06 19:22:45.780855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:00.817 19:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:01.076 [2024-12-06 19:22:46.057674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=286370 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 286370 /var/tmp/bdevperf.sock 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 286370 ']' 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.076 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.334 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.334 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:01.334 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:01.905 NVMe0n1 00:24:01.905 19:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:02.164 00:24:02.164 19:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=286508 00:24:02.164 19:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.164 19:22:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:03.543 19:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.543 [2024-12-06 19:22:48.502851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1948ee0 is same with the state(6) to be set 00:24:03.543 [2024-12-06 19:22:48.503645]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1948ee0 is same with the state(6) to be set 00:24:03.544 19:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:06.828 19:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:07.088 00:24:07.088 19:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:07.347 [2024-12-06 19:22:52.258355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1949c00 is same with the state(6) to be set 00:24:07.347 [2024-12-06 19:22:52.258505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x1949c00 is same with the state(6) to be set 00:24:07.348 [2024-12-06 19:22:52.258836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1949c00 is same with the state(6) to be
set 00:24:07.348 19:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:10.634 19:22:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.634 [2024-12-06 19:22:55.589615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.634 19:22:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:11.566 19:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:12.134 [2024-12-06 19:22:56.892894]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a951e0 is same with the state(6) to be set 00:24:12.134 [2024-12-06 19:22:56.893140]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a951e0 is same with the state(6) to be set 00:24:12.134 [2024-12-06 19:22:56.893151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a951e0 is same with the state(6) to be set 00:24:12.134 [2024-12-06 19:22:56.893162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a951e0 is same with the state(6) to be set 00:24:12.134 19:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 286508 00:24:17.406 { 00:24:17.406 "results": [ 00:24:17.406 { 00:24:17.406 "job": "NVMe0n1", 00:24:17.406 "core_mask": "0x1", 00:24:17.406 "workload": "verify", 00:24:17.406 "status": "finished", 00:24:17.406 "verify_range": { 00:24:17.406 "start": 0, 00:24:17.406 "length": 16384 00:24:17.406 }, 00:24:17.406 "queue_depth": 128, 00:24:17.406 "io_size": 4096, 00:24:17.406 "runtime": 15.008716, 00:24:17.406 "iops": 8640.779131272788, 00:24:17.406 "mibps": 33.75304348153433, 00:24:17.406 "io_failed": 6717, 00:24:17.406 "io_timeout": 0, 00:24:17.406 "avg_latency_us": 14056.441917571658, 00:24:17.406 "min_latency_us": 807.0637037037037, 00:24:17.406 "max_latency_us": 14563.555555555555 00:24:17.406 } 00:24:17.407 ], 00:24:17.407 "core_count": 1 00:24:17.407 } 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 286370 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 286370 ']' 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 286370 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286370 00:24:17.407 19:23:02 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286370' 00:24:17.407 killing process with pid 286370 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 286370 00:24:17.407 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 286370 00:24:17.670 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.670 [2024-12-06 19:22:46.121519] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:24:17.670 [2024-12-06 19:22:46.121603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid286370 ] 00:24:17.670 [2024-12-06 19:22:46.189921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.670 [2024-12-06 19:22:46.248376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.670 Running I/O for 15 seconds... 
00:24:17.670 8590.00 IOPS, 33.55 MiB/s [2024-12-06T18:23:02.719Z] [2024-12-06 19:22:48.506189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.671 [2024-12-06 19:22:48.506240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 
[2024-12-06 19:22:48.506407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506577] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.506961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.506985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 
19:22:48.507305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.671 [2024-12-06 19:22:48.507451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.671 [2024-12-06 19:22:48.507465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.507972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.507986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 
[2024-12-06 19:22:48.508175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.672 [2024-12-06 19:22:48.508232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.672 [2024-12-06 19:22:48.508260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.672 [2024-12-06 19:22:48.508288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508339] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.672 [2024-12-06 19:22:48.508625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.672 [2024-12-06 19:22:48.508639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.508979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.508993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 
19:22:48.509053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.673 [2024-12-06 19:22:48.509264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84552 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84560 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509413] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84568 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84576 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84584 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84592 len:8 PRP1 0x0 PRP2 0x0 
00:24:17.673 [2024-12-06 19:22:48.509572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84600 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84608 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84616 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509755] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84624 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84632 len:8 PRP1 0x0 PRP2 0x0 00:24:17.673 [2024-12-06 19:22:48.509837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.673 [2024-12-06 19:22:48.509851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.673 [2024-12-06 19:22:48.509861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.673 [2024-12-06 19:22:48.509872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84640 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.509885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.509898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.509908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.509919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84648 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.509931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.509944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.509955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.509966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84656 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.509978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.509991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84664 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84672 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84680 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84688 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84696 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510266] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84704 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84712 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84720 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84728 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 
[2024-12-06 19:22:48.510426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84736 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84744 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84752 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84760 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84768 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.674 [2024-12-06 19:22:48.510684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.674 [2024-12-06 19:22:48.510694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84776 len:8 PRP1 0x0 PRP2 0x0 00:24:17.674 [2024-12-06 19:22:48.510706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510798] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:17.674 [2024-12-06 19:22:48.510843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:17.674 [2024-12-06 19:22:48.510861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.674 [2024-12-06 19:22:48.510890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.674 [2024-12-06 19:22:48.510916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.674 [2024-12-06 19:22:48.510948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:48.510961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:17.674 [2024-12-06 19:22:48.514274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:17.674 [2024-12-06 19:22:48.514314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd820 (9): Bad file descriptor 00:24:17.674 [2024-12-06 19:22:48.584863] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:24:17.674 8284.50 IOPS, 32.36 MiB/s [2024-12-06T18:23:02.723Z] 8412.00 IOPS, 32.86 MiB/s [2024-12-06T18:23:02.723Z] 8472.00 IOPS, 33.09 MiB/s [2024-12-06T18:23:02.723Z] [2024-12-06 19:22:52.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.674 [2024-12-06 19:22:52.259123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:52.259153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.674 [2024-12-06 19:22:52.259169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:52.259185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.674 [2024-12-06 19:22:52.259198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.674 [2024-12-06 19:22:52.259214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 
[2024-12-06 19:22:52.259609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.259981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.259996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.260009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.260037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.260080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.675 [2024-12-06 19:22:52.260120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.675 
[2024-12-06 19:22:52.260149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.675 [2024-12-06 19:22:52.260176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.675 [2024-12-06 19:22:52.260208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.675 [2024-12-06 19:22:52.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.675 [2024-12-06 19:22:52.260235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260304] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:17.676 [2024-12-06 19:22:52.260467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.676 [2024-12-06 19:22:52.260480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.676 [... repeated NOTICE command/completion pairs elided: WRITE sqid:1 lba 99928–100200 and lba 100208–100432 (SGL DATA BLOCK), READ sqid:1 lba 99760–99808 (SGL TRANSPORT DATA BLOCK), each completion ABORTED - SQ DELETION (00/08) qid:1 ...] 00:24:17.678 [2024-12-06 19:22:52.262634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.678 [2024-12-06 19:22:52.262681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.678 [... repeated manual-completion sequences elided: WRITE sqid:1 cid:0 lba 100440–100512 and READ sqid:1 cid:0 lba 99816 (PRP1 0x0 PRP2 0x0), each ABORTED - SQ DELETION (00/08) qid:1 ...] 00:24:17.678 [2024-12-06 19:22:52.263243] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:17.678 [... four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3 down to cid:0) elided, each ABORTED - SQ DELETION (00/08) qid:0 ...] 00:24:17.678 [2024-12-06 19:22:52.263402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:17.678 [2024-12-06 19:22:52.263458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd820 (9): Bad file descriptor 00:24:17.678 [2024-12-06 19:22:52.266790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:17.678 [2024-12-06 19:22:52.288932] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:17.678 8473.00 IOPS, 33.10 MiB/s [2024-12-06T18:23:02.727Z] 8530.33 IOPS, 33.32 MiB/s [2024-12-06T18:23:02.727Z] 8555.14 IOPS, 33.42 MiB/s [2024-12-06T18:23:02.727Z] 8579.12 IOPS, 33.51 MiB/s [2024-12-06T18:23:02.727Z] 8589.00 IOPS, 33.55 MiB/s [2024-12-06T18:23:02.727Z] [2024-12-06 19:22:56.893297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.678 [2024-12-06 19:22:56.893350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.678 [... repeated NOTICE command/completion pairs elided: WRITE sqid:1 lba 36496–36592 (SGL DATA BLOCK) and READ sqid:1 lba 35840–35848 (SGL TRANSPORT DATA BLOCK), each completion ABORTED - SQ DELETION (00/08) qid:1; log truncates mid-entry at WRITE sqid:1 cid:69 ...]
nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.893868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.893883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.893925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.893940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.893959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.893974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.893988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 
[2024-12-06 19:22:56.894046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 
19:22:56.894528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.679 [2024-12-06 19:22:56.894541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894679] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.679 [2024-12-06 19:22:56.894766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.679 [2024-12-06 19:22:56.894781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.894981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.894995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.680 [2024-12-06 19:22:56.895037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895052] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.680 [2024-12-06 19:22:56.895397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.680 [2024-12-06 19:22:56.895919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.680 [2024-12-06 19:22:56.895978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.680 [2024-12-06 19:22:56.895993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.681 [2024-12-06 19:22:56.896425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.896731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 
[2024-12-06 19:22:56.896941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.681 [2024-12-06 19:22:56.896971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.896986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.681 [2024-12-06 19:22:56.897189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.681 [2024-12-06 19:22:56.897203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12270f0 is same with the state(6) to be set 00:24:17.682 [2024-12-06 19:22:56.897221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:17.682 [2024-12-06 19:22:56.897232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:17.682 [2024-12-06 19:22:56.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36480 len:8 PRP1 0x0 PRP2 0x0 00:24:17.682 [2024-12-06 19:22:56.897255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.682 [2024-12-06 19:22:56.897325] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:17.682 [2024-12-06 19:22:56.897363] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.682 [2024-12-06 19:22:56.897381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.682 [2024-12-06 19:22:56.897395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.682 [2024-12-06 19:22:56.897408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.682 [2024-12-06 19:22:56.897421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.682 [2024-12-06 19:22:56.897439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.682 [2024-12-06 19:22:56.897464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.682 [2024-12-06 19:22:56.897484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.682 [2024-12-06 19:22:56.897505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:17.682 [2024-12-06 19:22:56.900855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:17.682 [2024-12-06 19:22:56.900896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd820 (9): Bad file descriptor 00:24:17.682 [2024-12-06 19:22:56.963209] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:24:17.682 8549.80 IOPS, 33.40 MiB/s [2024-12-06T18:23:02.731Z] 8568.55 IOPS, 33.47 MiB/s [2024-12-06T18:23:02.731Z] 8597.42 IOPS, 33.58 MiB/s [2024-12-06T18:23:02.731Z] 8615.92 IOPS, 33.66 MiB/s [2024-12-06T18:23:02.731Z] 8626.43 IOPS, 33.70 MiB/s [2024-12-06T18:23:02.731Z] 8640.93 IOPS, 33.75 MiB/s 00:24:17.682 Latency(us) 00:24:17.682 [2024-12-06T18:23:02.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.682 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:17.682 Verification LBA range: start 0x0 length 0x4000 00:24:17.682 NVMe0n1 : 15.01 8640.78 33.75 447.54 0.00 14056.44 807.06 14563.56 00:24:17.682 [2024-12-06T18:23:02.731Z] =================================================================================================================== 00:24:17.682 [2024-12-06T18:23:02.731Z] Total : 8640.78 33.75 447.54 0.00 14056.44 807.06 14563.56 00:24:17.682 Received shutdown signal, test time was about 15.000000 seconds 00:24:17.682 00:24:17.682 Latency(us) 00:24:17.682 [2024-12-06T18:23:02.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.682 [2024-12-06T18:23:02.731Z] =================================================================================================================== 00:24:17.682 [2024-12-06T18:23:02.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=288238 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 288238 /var/tmp/bdevperf.sock 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 288238 ']' 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.682 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.939 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.939 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:17.939 19:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:18.196 [2024-12-06 19:23:03.163323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:18.196 19:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:18.454 [2024-12-06 19:23:03.484210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:18.713 19:23:03 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:18.971 NVMe0n1 00:24:18.972 19:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.539 00:24:19.539 19:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.800 00:24:20.061 19:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.061 19:23:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:20.319 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.576 19:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:23.861 19:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.861 19:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:23.861 19:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=289026 00:24:23.861 19:23:08 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.861 19:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 289026 00:24:24.800 { 00:24:24.800 "results": [ 00:24:24.800 { 00:24:24.800 "job": "NVMe0n1", 00:24:24.800 "core_mask": "0x1", 00:24:24.800 "workload": "verify", 00:24:24.800 "status": "finished", 00:24:24.800 "verify_range": { 00:24:24.800 "start": 0, 00:24:24.800 "length": 16384 00:24:24.800 }, 00:24:24.800 "queue_depth": 128, 00:24:24.800 "io_size": 4096, 00:24:24.800 "runtime": 1.011929, 00:24:24.800 "iops": 8692.309440682104, 00:24:24.800 "mibps": 33.95433375266447, 00:24:24.800 "io_failed": 0, 00:24:24.800 "io_timeout": 0, 00:24:24.800 "avg_latency_us": 14662.842490694424, 00:24:24.800 "min_latency_us": 3082.6192592592593, 00:24:24.800 "max_latency_us": 12718.838518518518 00:24:24.800 } 00:24:24.800 ], 00:24:24.800 "core_count": 1 00:24:24.800 } 00:24:25.059 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.059 [2024-12-06 19:23:02.670619] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:24:25.059 [2024-12-06 19:23:02.670698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288238 ] 00:24:25.059 [2024-12-06 19:23:02.738102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.059 [2024-12-06 19:23:02.793622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.059 [2024-12-06 19:23:05.388306] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:25.059 [2024-12-06 19:23:05.388387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.059 [2024-12-06 19:23:05.388410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.059 [2024-12-06 19:23:05.388426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.059 [2024-12-06 19:23:05.388440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.059 [2024-12-06 19:23:05.388460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.059 [2024-12-06 19:23:05.388473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.059 [2024-12-06 19:23:05.388487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.059 [2024-12-06 19:23:05.388500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.059 [2024-12-06 19:23:05.388513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:25.059 [2024-12-06 19:23:05.388557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:25.059 [2024-12-06 19:23:05.388598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc63820 (9): Bad file descriptor 00:24:25.059 [2024-12-06 19:23:05.439064] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:25.059 Running I/O for 1 seconds... 00:24:25.059 8668.00 IOPS, 33.86 MiB/s 00:24:25.059 Latency(us) 00:24:25.059 [2024-12-06T18:23:10.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:25.059 Verification LBA range: start 0x0 length 0x4000 00:24:25.059 NVMe0n1 : 1.01 8692.31 33.95 0.00 0.00 14662.84 3082.62 12718.84 00:24:25.059 [2024-12-06T18:23:10.108Z] =================================================================================================================== 00:24:25.059 [2024-12-06T18:23:10.108Z] Total : 8692.31 33.95 0.00 0.00 14662.84 3082.62 12718.84 00:24:25.059 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.059 19:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:25.319 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.578 19:23:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.578 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:25.836 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.093 19:23:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:29.384 19:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.384 19:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 288238 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 288238 ']' 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 288238 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288238 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288238' 00:24:29.384 killing process 
with pid 288238 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 288238 00:24:29.384 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 288238 00:24:29.643 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:29.643 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.903 rmmod nvme_tcp 00:24:29.903 rmmod nvme_fabrics 00:24:29.903 rmmod nvme_keyring 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 286048 ']' 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 286048 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 286048 ']' 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 286048 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286048 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286048' 00:24:29.903 killing process with pid 286048 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 286048 00:24:29.903 19:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 286048 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.163 19:23:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.702 19:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.702 00:24:32.702 real 0m35.794s 00:24:32.702 user 2m6.469s 00:24:32.703 sys 0m6.259s 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.703 ************************************ 00:24:32.703 END TEST nvmf_failover 00:24:32.703 ************************************ 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.703 ************************************ 00:24:32.703 START TEST nvmf_host_discovery 00:24:32.703 ************************************ 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.703 * Looking for test storage... 
00:24:32.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.703 --rc genhtml_branch_coverage=1 00:24:32.703 --rc genhtml_function_coverage=1 00:24:32.703 --rc 
genhtml_legend=1 00:24:32.703 --rc geninfo_all_blocks=1 00:24:32.703 --rc geninfo_unexecuted_blocks=1 00:24:32.703 00:24:32.703 ' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.703 --rc genhtml_branch_coverage=1 00:24:32.703 --rc genhtml_function_coverage=1 00:24:32.703 --rc genhtml_legend=1 00:24:32.703 --rc geninfo_all_blocks=1 00:24:32.703 --rc geninfo_unexecuted_blocks=1 00:24:32.703 00:24:32.703 ' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.703 --rc genhtml_branch_coverage=1 00:24:32.703 --rc genhtml_function_coverage=1 00:24:32.703 --rc genhtml_legend=1 00:24:32.703 --rc geninfo_all_blocks=1 00:24:32.703 --rc geninfo_unexecuted_blocks=1 00:24:32.703 00:24:32.703 ' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.703 --rc genhtml_branch_coverage=1 00:24:32.703 --rc genhtml_function_coverage=1 00:24:32.703 --rc genhtml_legend=1 00:24:32.703 --rc geninfo_all_blocks=1 00:24:32.703 --rc geninfo_unexecuted_blocks=1 00:24:32.703 00:24:32.703 ' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.703 19:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.703 19:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:32.703 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.704 19:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.704 19:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.608 
19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.608 19:23:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:34.608 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:34.608 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.608 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:34.609 Found net devices under 0000:84:00.0: cvl_0_0 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:34.609 Found net devices under 0000:84:00.1: cvl_0_1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:24:34.609 00:24:34.609 --- 10.0.0.2 ping statistics --- 00:24:34.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.609 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:24:34.609 00:24:34.609 --- 10.0.0.1 ping statistics --- 00:24:34.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.609 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.609 
19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=291662 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 291662 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 291662 ']' 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.609 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.609 [2024-12-06 19:23:19.568910] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:24:34.609 [2024-12-06 19:23:19.568980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.866 [2024-12-06 19:23:19.659310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.866 [2024-12-06 19:23:19.729236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.867 [2024-12-06 19:23:19.729307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.867 [2024-12-06 19:23:19.729345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.867 [2024-12-06 19:23:19.729368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.867 [2024-12-06 19:23:19.729386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.867 [2024-12-06 19:23:19.730293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.124 [2024-12-06 19:23:19.945147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.124 [2024-12-06 19:23:19.953363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:35.124 19:23:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.124 null0 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.124 null1 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.124 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=291774 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 291774 /tmp/host.sock 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 291774 ']' 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:35.125 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.125 19:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.125 [2024-12-06 19:23:20.031509] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:24:35.125 [2024-12-06 19:23:20.031614] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291774 ] 00:24:35.125 [2024-12-06 19:23:20.098525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.125 [2024-12-06 19:23:20.155872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:35.383 19:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.383 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:35.384 19:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.384 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:35.644 19:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 [2024-12-06 19:23:20.558976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:35.644 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:35.905 19:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:36.472 [2024-12-06 19:23:21.343387] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:36.472 [2024-12-06 19:23:21.343416] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:36.472 [2024-12-06 19:23:21.343439] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.472 [2024-12-06 19:23:21.471885] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:36.731 [2024-12-06 19:23:21.693069] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:36.731 [2024-12-06 19:23:21.694062] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x8d40d0:1 started. 00:24:36.731 [2024-12-06 19:23:21.695689] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:36.731 [2024-12-06 19:23:21.695731] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:36.731 [2024-12-06 19:23:21.701543] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8d40d0 was disconnected and freed. delete nvme_qpair. 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.731 19:23:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.731 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:36.990 [2024-12-06 19:23:21.895594] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] 
Connecting qpair 0x8a2800:1 started. 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.990 [2024-12-06 19:23:21.902317] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8a2800 was disconnected and freed. delete nvme_qpair. 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.990 19:23:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 [2024-12-06 19:23:21.974876] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.990 [2024-12-06 19:23:21.975238] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:36.990 [2024-12-06 19:23:21.975266] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.990 19:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.990 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.990 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:36.990 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.991 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.249 19:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:37.249 19:23:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:37.249 [2024-12-06 19:23:22.101037] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:37.249 [2024-12-06 19:23:22.247123] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:37.249 [2024-12-06 19:23:22.247167] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:37.249 [2024-12-06 19:23:22.247181] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:37.249 [2024-12-06 19:23:22.247188] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.191 [2024-12-06 19:23:23.190694] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:38.191 [2024-12-06 19:23:23.190754] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.191 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:38.192 [2024-12-06 19:23:23.196984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.192 [2024-12-06 19:23:23.197042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.192 [2024-12-06 19:23:23.197058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.192 [2024-12-06 19:23:23.197072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.192 [2024-12-06 19:23:23.197111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.192 [2024-12-06 19:23:23.197124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.192 [2024-12-06 19:23:23.197145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.192 [2024-12-06 19:23:23.197158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.192 [2024-12-06 19:23:23.197171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.192 19:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.192 [2024-12-06 19:23:23.206989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.192 [2024-12-06 19:23:23.217045] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.192 [2024-12-06 19:23:23.217077] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:38.192 [2024-12-06 19:23:23.217089] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.192 [2024-12-06 19:23:23.217099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.192 [2024-12-06 19:23:23.217148] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:38.192 [2024-12-06 19:23:23.217354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.192 [2024-12-06 19:23:23.217381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.192 [2024-12-06 19:23:23.217398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.192 [2024-12-06 19:23:23.217420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.192 [2024-12-06 19:23:23.217441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.192 [2024-12-06 19:23:23.217454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.192 [2024-12-06 19:23:23.217470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.192 [2024-12-06 19:23:23.217483] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.192 [2024-12-06 19:23:23.217493] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.192 [2024-12-06 19:23:23.217500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:38.192 [2024-12-06 19:23:23.227181] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.192 [2024-12-06 19:23:23.227211] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:38.192 [2024-12-06 19:23:23.227220] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.192 [2024-12-06 19:23:23.227227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.192 [2024-12-06 19:23:23.227267] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.192 [2024-12-06 19:23:23.227395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.192 [2024-12-06 19:23:23.227421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.192 [2024-12-06 19:23:23.227436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.192 [2024-12-06 19:23:23.227457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.192 [2024-12-06 19:23:23.227477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.192 [2024-12-06 19:23:23.227489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.192 [2024-12-06 19:23:23.227502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.192 [2024-12-06 19:23:23.227513] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.192 [2024-12-06 19:23:23.227521] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.192 [2024-12-06 19:23:23.227528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:38.192 [2024-12-06 19:23:23.237301] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.192 [2024-12-06 19:23:23.237323] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:38.192 [2024-12-06 19:23:23.237332] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.192 [2024-12-06 19:23:23.237339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.192 [2024-12-06 19:23:23.237380] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.192 [2024-12-06 19:23:23.237489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.192 [2024-12-06 19:23:23.237515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.192 [2024-12-06 19:23:23.237532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.192 [2024-12-06 19:23:23.237552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.192 [2024-12-06 19:23:23.237572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.192 [2024-12-06 19:23:23.237585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.192 [2024-12-06 19:23:23.237597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.192 [2024-12-06 19:23:23.237608] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:38.192 [2024-12-06 19:23:23.237617] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.192 [2024-12-06 19:23:23.237624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.192 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.193 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.193 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.193 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.193 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.193 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.453 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:24:38.453 [2024-12-06 19:23:23.247414] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.453 [2024-12-06 19:23:23.247436] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:38.453 [2024-12-06 19:23:23.247445] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.453 [2024-12-06 19:23:23.247452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.453 [2024-12-06 19:23:23.247491] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.453 [2024-12-06 19:23:23.247627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.453 [2024-12-06 19:23:23.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.453 [2024-12-06 19:23:23.247669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.453 [2024-12-06 19:23:23.247690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.453 [2024-12-06 19:23:23.247746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.453 [2024-12-06 19:23:23.247764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.453 [2024-12-06 19:23:23.247788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.453 [2024-12-06 19:23:23.247800] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:38.453 [2024-12-06 19:23:23.247808] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.454 [2024-12-06 19:23:23.247816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:38.454 [2024-12-06 19:23:23.257525] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.454 [2024-12-06 19:23:23.257545] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:38.454 [2024-12-06 19:23:23.257554] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.454 [2024-12-06 19:23:23.257561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.454 [2024-12-06 19:23:23.257601] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:38.454 [2024-12-06 19:23:23.257766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.454 [2024-12-06 19:23:23.257794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.454 [2024-12-06 19:23:23.257819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.454 [2024-12-06 19:23:23.257842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.454 [2024-12-06 19:23:23.257874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.454 [2024-12-06 19:23:23.257891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.454 [2024-12-06 19:23:23.257904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.454 [2024-12-06 19:23:23.257916] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.454 [2024-12-06 19:23:23.257925] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.454 [2024-12-06 19:23:23.257932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 [2024-12-06 19:23:23.267635] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:38.454 [2024-12-06 19:23:23.267655] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:38.454 [2024-12-06 19:23:23.267663] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:38.454 [2024-12-06 19:23:23.267670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:38.454 [2024-12-06 19:23:23.267710] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:38.454 [2024-12-06 19:23:23.267857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.454 [2024-12-06 19:23:23.267885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a4710 with addr=10.0.0.2, port=4420 00:24:38.454 [2024-12-06 19:23:23.267901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a4710 is same with the state(6) to be set 00:24:38.454 [2024-12-06 19:23:23.267922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a4710 (9): Bad file descriptor 00:24:38.454 [2024-12-06 19:23:23.267956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:38.454 [2024-12-06 19:23:23.267974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:38.454 [2024-12-06 19:23:23.267987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:38.454 [2024-12-06 19:23:23.268015] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:38.454 [2024-12-06 19:23:23.268023] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:38.454 [2024-12-06 19:23:23.268030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:38.454 [2024-12-06 19:23:23.276983] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:38.454 [2024-12-06 19:23:23.277027] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.454 19:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:38.454 19:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.454 
19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.454 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:38.455 19:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.455 19:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.837 [2024-12-06 19:23:24.561841] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:39.837 [2024-12-06 19:23:24.561875] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:39.837 [2024-12-06 19:23:24.561899] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.837 [2024-12-06 19:23:24.649146] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:40.096 [2024-12-06 19:23:24.958762] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:40.096 [2024-12-06 19:23:24.959737] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x8b1560:1 started. 00:24:40.096 [2024-12-06 19:23:24.961907] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:40.096 [2024-12-06 19:23:24.961951] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.096 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 [2024-12-06 19:23:24.970629] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x8b1560 was disconnected and freed. delete nvme_qpair. 00:24:40.097 request: 00:24:40.097 { 00:24:40.097 "name": "nvme", 00:24:40.097 "trtype": "tcp", 00:24:40.097 "traddr": "10.0.0.2", 00:24:40.097 "adrfam": "ipv4", 00:24:40.097 "trsvcid": "8009", 00:24:40.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:40.097 "wait_for_attach": true, 00:24:40.097 "method": "bdev_nvme_start_discovery", 00:24:40.097 "req_id": 1 00:24:40.097 } 00:24:40.097 Got JSON-RPC error response 00:24:40.097 response: 00:24:40.097 { 00:24:40.097 "code": -17, 00:24:40.097 "message": "File exists" 00:24:40.097 } 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:40.097 19:23:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:40.097 19:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:40.097 19:23:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 request: 00:24:40.097 { 00:24:40.097 "name": "nvme_second", 00:24:40.097 "trtype": "tcp", 00:24:40.097 "traddr": "10.0.0.2", 00:24:40.097 "adrfam": "ipv4", 00:24:40.097 "trsvcid": "8009", 00:24:40.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:40.097 "wait_for_attach": true, 00:24:40.097 "method": "bdev_nvme_start_discovery", 00:24:40.097 "req_id": 1 00:24:40.097 } 00:24:40.097 Got JSON-RPC error response 00:24:40.097 response: 00:24:40.097 { 00:24:40.097 "code": -17, 00:24:40.097 "message": "File exists" 00:24:40.097 } 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:24:40.097 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.356 19:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.293 [2024-12-06 19:23:26.170233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.293 [2024-12-06 19:23:26.170323] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8be1a0 with addr=10.0.0.2, port=8010 00:24:41.293 [2024-12-06 19:23:26.170356] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:41.293 [2024-12-06 19:23:26.170371] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:41.293 [2024-12-06 19:23:26.170383] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:42.231 [2024-12-06 19:23:27.172685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.231 [2024-12-06 19:23:27.172793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8be1a0 with addr=10.0.0.2, port=8010 00:24:42.231 [2024-12-06 19:23:27.172838] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:42.231 [2024-12-06 19:23:27.172853] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:42.231 [2024-12-06 19:23:27.172866] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:43.171 [2024-12-06 19:23:28.174755] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:43.171 request: 00:24:43.171 { 00:24:43.171 "name": "nvme_second", 00:24:43.171 "trtype": "tcp", 00:24:43.171 "traddr": "10.0.0.2", 00:24:43.171 "adrfam": "ipv4", 00:24:43.171 "trsvcid": "8010", 00:24:43.171 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:43.171 "wait_for_attach": false, 00:24:43.171 "attach_timeout_ms": 3000, 00:24:43.171 "method": "bdev_nvme_start_discovery", 00:24:43.171 "req_id": 1 00:24:43.171 } 00:24:43.171 Got JSON-RPC error response 00:24:43.171 response: 00:24:43.171 { 00:24:43.171 "code": -110, 00:24:43.171 "message": "Connection timed out" 00:24:43.171 } 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 
1 == 0 ]] 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:43.171 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.432 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 291774 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:43.433 19:23:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.433 rmmod nvme_tcp 00:24:43.433 rmmod nvme_fabrics 00:24:43.433 rmmod nvme_keyring 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 291662 ']' 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 291662 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 291662 ']' 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 291662 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291662 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291662' 00:24:43.433 
killing process with pid 291662 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 291662 00:24:43.433 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 291662 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.694 19:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.607 00:24:45.607 real 0m13.367s 00:24:45.607 user 0m19.356s 00:24:45.607 sys 0m2.831s 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.607 ************************************ 00:24:45.607 END TEST nvmf_host_discovery 00:24:45.607 ************************************ 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.607 19:23:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.866 ************************************ 00:24:45.866 START TEST nvmf_host_multipath_status 00:24:45.866 ************************************ 00:24:45.866 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:45.866 * Looking for test storage... 
00:24:45.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.866 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:45.866 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:45.866 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:45.866 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:45.867 19:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.867 19:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:45.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.867 --rc genhtml_branch_coverage=1 00:24:45.867 --rc genhtml_function_coverage=1 00:24:45.867 --rc genhtml_legend=1 00:24:45.867 --rc geninfo_all_blocks=1 00:24:45.867 --rc geninfo_unexecuted_blocks=1 00:24:45.867 00:24:45.867 ' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:45.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.867 --rc genhtml_branch_coverage=1 00:24:45.867 --rc genhtml_function_coverage=1 00:24:45.867 --rc genhtml_legend=1 00:24:45.867 --rc geninfo_all_blocks=1 00:24:45.867 --rc geninfo_unexecuted_blocks=1 00:24:45.867 00:24:45.867 ' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:45.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.867 --rc genhtml_branch_coverage=1 00:24:45.867 --rc genhtml_function_coverage=1 00:24:45.867 --rc genhtml_legend=1 00:24:45.867 --rc geninfo_all_blocks=1 00:24:45.867 --rc geninfo_unexecuted_blocks=1 00:24:45.867 00:24:45.867 ' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:45.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.867 --rc genhtml_branch_coverage=1 00:24:45.867 --rc genhtml_function_coverage=1 00:24:45.867 --rc genhtml_legend=1 00:24:45.867 --rc geninfo_all_blocks=1 00:24:45.867 --rc geninfo_unexecuted_blocks=1 00:24:45.867 00:24:45.867 ' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:45.867 
19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:45.867 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.868 19:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.868 19:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.768 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.769 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:48.030 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:48.030 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:48.030 Found net devices under 0000:84:00.0: cvl_0_0 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.030 19:23:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:48.030 Found net devices under 0000:84:00.1: cvl_0_1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.030 19:23:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:24:48.030 00:24:48.030 --- 10.0.0.2 ping statistics --- 00:24:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.030 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:24:48.030 00:24:48.030 --- 10.0.0.1 ping statistics --- 00:24:48.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.030 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.030 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=294865 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 294865 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 294865 ']' 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.031 19:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.031 [2024-12-06 19:23:33.030560] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:24:48.031 [2024-12-06 19:23:33.030668] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.289 [2024-12-06 19:23:33.104452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:48.289 [2024-12-06 19:23:33.163127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.289 [2024-12-06 19:23:33.163195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:48.289 [2024-12-06 19:23:33.163223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.289 [2024-12-06 19:23:33.163234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.289 [2024-12-06 19:23:33.163244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.289 [2024-12-06 19:23:33.164870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.289 [2024-12-06 19:23:33.164878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=294865 00:24:48.289 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:48.548 [2024-12-06 19:23:33.596729] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.806 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:49.064 Malloc0 00:24:49.065 19:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:49.322 19:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.580 19:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.838 [2024-12-06 19:23:34.719744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.838 19:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.097 [2024-12-06 19:23:35.000469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=295150 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 295150 /var/tmp/bdevperf.sock 00:24:50.097 19:23:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 295150 ']' 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.097 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:50.353 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.353 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:50.353 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:50.610 19:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:51.177 Nvme0n1 00:24:51.177 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:51.433 Nvme0n1 00:24:51.433 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:51.433 19:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:53.965 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:53.965 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:53.965 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:53.965 19:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:55.338 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:55.338 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:55.338 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.338 19:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:55.338 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.338 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:55.338 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.338 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.596 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.596 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.596 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.596 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.166 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.166 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.166 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.166 19:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:56.425 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.425 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:56.425 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.425 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:56.683 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.683 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:56.683 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.683 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:56.942 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.942 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:56.942 19:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.200 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:57.460 19:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:58.396 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:58.396 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:58.396 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.396 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:58.962 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.962 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:58.962 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.962 19:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.220 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.220 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.220 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.220 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.478 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.478 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.478 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.478 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:59.737 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.737 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:59.737 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.737 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:59.995 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.995 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:59.995 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.995 19:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.252 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.252 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:00.252 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.818 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.076 19:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:02.016 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:02.016 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.016 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.016 19:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.274 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.274 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:02.274 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.274 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.533 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.533 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.533 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.533 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.791 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.791 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.791 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.791 19:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.357 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.357 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.357 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.357 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.616 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.616 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.616 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.616 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.874 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.874 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:03.874 19:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:04.133 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.392 19:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:05.331 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:05.331 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.331 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.331 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.896 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.896 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.896 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.896 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.155 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.155 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.155 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.155 19:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.413 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.413 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.413 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.413 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.672 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.672 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.672 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.672 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.931 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.932 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.932 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.932 19:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:07.191 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.191 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:07.191 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:07.452 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:08.022 19:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:08.961 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:08.961 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.961 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.961 19:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:09.219 19:23:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.219 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:09.219 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.219 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:09.477 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.477 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:09.477 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.477 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:09.735 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.735 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.735 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.735 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.993 
19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.993 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:09.993 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.993 19:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:10.250 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.250 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:10.250 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.250 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:10.507 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.507 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:10.507 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:10.765 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:11.024 19:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:11.962 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:11.962 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:11.962 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.962 19:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:12.221 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:12.221 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:12.221 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.221 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.786 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.786 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.786 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.786 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.102 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.102 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.102 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.102 19:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.359 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.359 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:13.359 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.359 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.616 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.616 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.616 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.616 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:13.873 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.873 19:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:14.131 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:14.131 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:14.711 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:14.968 19:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:15.899 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:15.899 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.899 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:15.899 19:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.157 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.157 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:16.157 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.157 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.723 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.723 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.723 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.723 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.981 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.981 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.981 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:16.981 19:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.239 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.239 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:17.239 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.240 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.498 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.498 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:17.498 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.498 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.756 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.756 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:17.756 19:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:18.014 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:18.581 19:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:19.519 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:19.519 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:19.519 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.519 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.777 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.777 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:19.777 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.777 19:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.036 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.036 19:24:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.036 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.036 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.293 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.293 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.293 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.293 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.858 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.858 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:20.858 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.858 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.116 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.116 
19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:21.116 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.116 19:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.375 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.375 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:21.375 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:21.634 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:21.893 19:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:23.270 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:23.270 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.270 19:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.270 19:24:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:23.270 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.270 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:23.270 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.270 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.529 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.529 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.529 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.529 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.096 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.096 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.097 19:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.097 19:24:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.354 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.355 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.355 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.355 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.613 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.613 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.613 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.613 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.872 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.872 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:24.872 19:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.130 19:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:25.696 19:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:26.636 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:26.636 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:26.636 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.636 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.896 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.896 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:26.896 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.896 19:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.154 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.154 19:24:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.154 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.154 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.412 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.412 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.412 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.412 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.671 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.671 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.671 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.671 19:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.271 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.271 
19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:28.271 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.271 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 295150 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 295150 ']' 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 295150 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295150 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295150' 00:25:28.549 killing process with pid 295150 00:25:28.549 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 295150 00:25:28.549 19:24:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 295150 00:25:28.549 { 00:25:28.549 "results": [ 00:25:28.549 { 00:25:28.549 "job": "Nvme0n1", 00:25:28.549 "core_mask": "0x4", 00:25:28.549 "workload": "verify", 00:25:28.549 "status": "terminated", 00:25:28.549 "verify_range": { 00:25:28.549 "start": 0, 00:25:28.549 "length": 16384 00:25:28.549 }, 00:25:28.549 "queue_depth": 128, 00:25:28.549 "io_size": 4096, 00:25:28.549 "runtime": 36.792342, 00:25:28.549 "iops": 8418.137665713153, 00:25:28.549 "mibps": 32.883350256692005, 00:25:28.549 "io_failed": 0, 00:25:28.549 "io_timeout": 0, 00:25:28.549 "avg_latency_us": 15181.077436873402, 00:25:28.549 "min_latency_us": 442.9748148148148, 00:25:28.549 "max_latency_us": 4026531.84 00:25:28.549 } 00:25:28.549 ], 00:25:28.549 "core_count": 1 00:25:28.549 } 00:25:28.836 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 295150 00:25:28.836 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:28.836 [2024-12-06 19:23:35.062754] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:25:28.836 [2024-12-06 19:23:35.062835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295150 ] 00:25:28.836 [2024-12-06 19:23:35.129493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.836 [2024-12-06 19:23:35.187024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.836 Running I/O for 90 seconds... 
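Every `port_status` check replayed in this transcript follows the same pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, filter the reply with `jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="…").FIELD'`, and compare the result against the expected value. The selection logic can be sketched locally in Python; note the sample JSON below is an assumption reconstructed from the jq filter's shape, not captured RPC output:

```python
import json

# Sample reply shaped the way the log's jq filter implies:
# .poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(reply, port, field):
    """Return `field` for the io_path listening on `port`, mirroring the
    jq select() used by host/multipath_status.sh's port_status helper."""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(port)

print(port_status(SAMPLE, "4420", "current"))    # True
print(port_status(SAMPLE, "4421", "connected"))  # True
```

In the actual test these values come from `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`; the canned document above only illustrates the per-port field selection.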
00:25:28.836 8823.00 IOPS, 34.46 MiB/s [2024-12-06T18:24:13.885Z] 9038.50 IOPS, 35.31 MiB/s [2024-12-06T18:24:13.885Z] 9026.67 IOPS, 35.26 MiB/s [2024-12-06T18:24:13.885Z] 8997.00 IOPS, 35.14 MiB/s [2024-12-06T18:24:13.885Z] 8980.00 IOPS, 35.08 MiB/s [2024-12-06T18:24:13.885Z] 8950.00 IOPS, 34.96 MiB/s [2024-12-06T18:24:13.885Z] 8961.00 IOPS, 35.00 MiB/s [2024-12-06T18:24:13.885Z] 8961.00 IOPS, 35.00 MiB/s [2024-12-06T18:24:13.885Z] 8950.89 IOPS, 34.96 MiB/s [2024-12-06T18:24:13.885Z] 8946.50 IOPS, 34.95 MiB/s [2024-12-06T18:24:13.885Z] 8931.91 IOPS, 34.89 MiB/s [2024-12-06T18:24:13.885Z] 8916.50 IOPS, 34.83 MiB/s [2024-12-06T18:24:13.885Z] 8906.92 IOPS, 34.79 MiB/s [2024-12-06T18:24:13.885Z] 8898.79 IOPS, 34.76 MiB/s [2024-12-06T18:24:13.885Z] 8875.93 IOPS, 34.67 MiB/s [2024-12-06T18:24:13.886Z] [2024-12-06 19:23:52.477573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.477983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.477999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84512 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.479672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:002f p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.837 [2024-12-06 19:23:52.479992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:25:28.837 [2024-12-06 19:23:52.480278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 
[2024-12-06 19:23:52.480486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 
19:23:52.480696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:28.837 [2024-12-06 19:23:52.480783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.837 [2024-12-06 19:23:52.480798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.480821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.480836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.480863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.480879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.480902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.480918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.481413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.481973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.481998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.838 [2024-12-06 19:23:52.482748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.482791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.482834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.482876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.482917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.838 [2024-12-06 19:23:52.482960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:28.838 [2024-12-06 19:23:52.482986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:23:52.483002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:23:52.483029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:23:52.483045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:28.839 8850.38 IOPS, 34.57 MiB/s [2024-12-06T18:24:13.888Z] 8329.76 IOPS, 32.54 MiB/s [2024-12-06T18:24:13.888Z] 7867.00 IOPS, 30.73 MiB/s [2024-12-06T18:24:13.888Z] 7452.95 IOPS, 29.11 MiB/s [2024-12-06T18:24:13.888Z] 7101.95 IOPS, 27.74 MiB/s [2024-12-06T18:24:13.888Z] 7186.29 IOPS, 28.07 MiB/s [2024-12-06T18:24:13.888Z] 7271.18 IOPS, 28.40 MiB/s [2024-12-06T18:24:13.888Z] 7340.04 IOPS, 28.67 MiB/s [2024-12-06T18:24:13.888Z] 7520.29 IOPS, 29.38 MiB/s [2024-12-06T18:24:13.888Z] 7675.04 IOPS, 29.98 MiB/s [2024-12-06T18:24:13.888Z] 7818.62 IOPS, 30.54 MiB/s [2024-12-06T18:24:13.888Z] 7905.48 IOPS, 30.88 MiB/s [2024-12-06T18:24:13.888Z] 7951.04 IOPS, 31.06 MiB/s [2024-12-06T18:24:13.888Z] 7989.24 IOPS, 31.21 MiB/s [2024-12-06T18:24:13.888Z] 8015.27 IOPS, 31.31 MiB/s 
[2024-12-06T18:24:13.888Z] 8100.03 IOPS, 31.64 MiB/s [2024-12-06T18:24:13.888Z] 8208.94 IOPS, 32.07 MiB/s [2024-12-06T18:24:13.888Z] 8309.18 IOPS, 32.46 MiB/s [2024-12-06T18:24:13.888Z] [2024-12-06 19:24:10.451655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.451754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:28.839 
[2024-12-06 19:24:10.452349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 
19:24:10.452584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.452849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.452981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.452997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.839 [2024-12-06 19:24:10.453839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.453877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.453915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.453953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.453974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.453990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.839 [2024-12-06 19:24:10.454454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.839 [2024-12-06 19:24:10.454475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.454965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.454986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.455001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.840 [2024-12-06 19:24:10.455046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.840 [2024-12-06 19:24:10.455062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:28.840 8387.26 IOPS, 32.76 MiB/s [2024-12-06T18:24:13.889Z] 8402.49 IOPS, 32.82 MiB/s [2024-12-06T18:24:13.889Z] 8415.72 IOPS, 32.87 MiB/s [2024-12-06T18:24:13.889Z] Received shutdown signal, test time was about 36.793155 seconds 00:25:28.840 00:25:28.840 Latency(us) 00:25:28.840 [2024-12-06T18:24:13.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.840 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:28.840 Verification LBA range: start 0x0 length 0x4000 00:25:28.840 Nvme0n1 : 36.79 8418.14 32.88 0.00 0.00 15181.08 442.97 4026531.84 00:25:28.840 [2024-12-06T18:24:13.889Z] =================================================================================================================== 00:25:28.840 
[2024-12-06T18:24:13.889Z] Total : 8418.14 32.88 0.00 0.00 15181.08 442.97 4026531.84 00:25:28.840 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.129 rmmod nvme_tcp 00:25:29.129 rmmod nvme_fabrics 00:25:29.129 rmmod nvme_keyring 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 294865 ']' 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@518 -- # killprocess 294865 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 294865 ']' 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 294865 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294865 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294865' 00:25:29.129 killing process with pid 294865 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 294865 00:25:29.129 19:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 294865 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.414 19:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.343 00:25:31.343 real 0m45.574s 00:25:31.343 user 2m19.788s 00:25:31.343 sys 0m12.172s 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:31.343 ************************************ 00:25:31.343 END TEST nvmf_host_multipath_status 00:25:31.343 ************************************ 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.343 ************************************ 00:25:31.343 START TEST nvmf_discovery_remove_ifc 00:25:31.343 
************************************ 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:31.343 * Looking for test storage... 00:25:31.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:31.343 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.603 19:24:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:31.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.603 --rc genhtml_branch_coverage=1 00:25:31.603 --rc genhtml_function_coverage=1 00:25:31.603 --rc genhtml_legend=1 00:25:31.603 --rc geninfo_all_blocks=1 00:25:31.603 --rc geninfo_unexecuted_blocks=1 00:25:31.603 00:25:31.603 ' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:31.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.603 --rc genhtml_branch_coverage=1 00:25:31.603 --rc genhtml_function_coverage=1 00:25:31.603 --rc genhtml_legend=1 00:25:31.603 --rc geninfo_all_blocks=1 00:25:31.603 --rc geninfo_unexecuted_blocks=1 00:25:31.603 00:25:31.603 ' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:31.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.603 --rc genhtml_branch_coverage=1 00:25:31.603 --rc genhtml_function_coverage=1 00:25:31.603 --rc genhtml_legend=1 00:25:31.603 --rc geninfo_all_blocks=1 00:25:31.603 --rc geninfo_unexecuted_blocks=1 00:25:31.603 00:25:31.603 ' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:31.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.603 --rc genhtml_branch_coverage=1 00:25:31.603 --rc genhtml_function_coverage=1 00:25:31.603 --rc genhtml_legend=1 00:25:31.603 --rc geninfo_all_blocks=1 00:25:31.603 --rc geninfo_unexecuted_blocks=1 00:25:31.603 00:25:31.603 ' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.603 19:24:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.603 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:31.604 
19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.604 19:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:33.502 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:33.502 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:33.502 Found net devices under 0000:84:00.0: cvl_0_0 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.502 19:24:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.502 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:33.503 Found net devices under 0000:84:00.1: cvl_0_1 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.503 19:24:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.503 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.759 19:24:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:25:33.759 00:25:33.759 --- 10.0.0.2 ping statistics --- 00:25:33.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.759 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:33.759 00:25:33.759 --- 10.0.0.1 ping statistics --- 00:25:33.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.759 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=302414 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 302414 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 302414 ']' 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.759 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.759 [2024-12-06 19:24:18.741271] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:25:33.759 [2024-12-06 19:24:18.741348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.016 [2024-12-06 19:24:18.816248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.016 [2024-12-06 19:24:18.869139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.016 [2024-12-06 19:24:18.869200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:34.017 [2024-12-06 19:24:18.869226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.017 [2024-12-06 19:24:18.869237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.017 [2024-12-06 19:24:18.869247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.017 [2024-12-06 19:24:18.869864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.017 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.017 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:34.017 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.017 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.017 19:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.017 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:34.017 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.017 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.017 [2024-12-06 19:24:19.015879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.017 [2024-12-06 19:24:19.024099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:34.017 null0 00:25:34.017 [2024-12-06 19:24:19.055989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=302543 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 302543 /tmp/host.sock 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 302543 ']' 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:34.274 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.274 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.274 [2024-12-06 19:24:19.126271] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:25:34.274 [2024-12-06 19:24:19.126347] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302543 ] 00:25:34.275 [2024-12-06 19:24:19.192976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.275 [2024-12-06 19:24:19.250505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.532 19:24:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.532 19:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.468 [2024-12-06 19:24:20.514881] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:35.468 [2024-12-06 19:24:20.514934] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:35.468 [2024-12-06 19:24:20.514959] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.727 [2024-12-06 19:24:20.602243] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:35.986 [2024-12-06 19:24:20.784441] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:35.986 [2024-12-06 19:24:20.785530] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5d30d0:1 started. 
00:25:35.986 [2024-12-06 19:24:20.787223] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:35.986 [2024-12-06 19:24:20.787279] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:35.986 [2024-12-06 19:24:20.787320] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:35.986 [2024-12-06 19:24:20.787344] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:35.986 [2024-12-06 19:24:20.787379] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:35.986 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.986 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:35.986 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.986 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.987 [2024-12-06 19:24:20.832338] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5d30d0 was disconnected and freed. delete nvme_qpair. 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.987 19:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:36.924 19:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:38.304 19:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.304 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:38.304 19:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:39.244 19:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:40.182 19:24:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:40.182 19:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:41.119 19:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:25:41.379 [2024-12-06 19:24:26.228572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:41.379 [2024-12-06 19:24:26.228667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.379 [2024-12-06 19:24:26.228691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.379 [2024-12-06 19:24:26.228711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.379 [2024-12-06 19:24:26.228747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.379 [2024-12-06 19:24:26.228764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.379 [2024-12-06 19:24:26.228778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.379 [2024-12-06 19:24:26.228793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.379 [2024-12-06 19:24:26.228806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.379 [2024-12-06 19:24:26.228820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.379 [2024-12-06 19:24:26.228833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.379 [2024-12-06 19:24:26.228847] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5af9e0 is same with the state(6) to be set 00:25:41.379 [2024-12-06 19:24:26.238590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5af9e0 (9): Bad file descriptor 00:25:41.379 [2024-12-06 19:24:26.248632] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.379 [2024-12-06 19:24:26.248654] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.379 [2024-12-06 19:24:26.248673] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.379 [2024-12-06 19:24:26.248683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.379 [2024-12-06 19:24:26.248750] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.313 [2024-12-06 19:24:27.292758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:42.313 [2024-12-06 19:24:27.292832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5af9e0 with addr=10.0.0.2, port=4420 00:25:42.313 [2024-12-06 19:24:27.292857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5af9e0 is same with the state(6) to be set 00:25:42.313 [2024-12-06 19:24:27.292902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5af9e0 (9): Bad file descriptor 00:25:42.313 [2024-12-06 19:24:27.293316] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:42.314 [2024-12-06 19:24:27.293359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:42.314 [2024-12-06 19:24:27.293377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:42.314 [2024-12-06 19:24:27.293393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:42.314 [2024-12-06 19:24:27.293407] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:42.314 [2024-12-06 19:24:27.293418] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:42.314 [2024-12-06 19:24:27.293425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.314 [2024-12-06 19:24:27.293439] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:42.314 [2024-12-06 19:24:27.293448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:42.314 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.314 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.314 19:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.249 [2024-12-06 19:24:28.295935] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:43.249 [2024-12-06 19:24:28.295960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:43.249 [2024-12-06 19:24:28.295977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:43.249 [2024-12-06 19:24:28.296004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:43.249 [2024-12-06 19:24:28.296017] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:43.249 [2024-12-06 19:24:28.296039] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:43.249 [2024-12-06 19:24:28.296063] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:43.249 [2024-12-06 19:24:28.296071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:43.249 [2024-12-06 19:24:28.296112] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:43.249 [2024-12-06 19:24:28.296149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.249 [2024-12-06 19:24:28.296170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.249 [2024-12-06 19:24:28.296189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.249 [2024-12-06 19:24:28.296209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.249 [2024-12-06 19:24:28.296221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:43.249 [2024-12-06 19:24:28.296234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.249 [2024-12-06 19:24:28.296247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.249 [2024-12-06 19:24:28.296259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.249 [2024-12-06 19:24:28.296271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.249 [2024-12-06 19:24:28.296283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.249 [2024-12-06 19:24:28.296296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:43.249 [2024-12-06 19:24:28.296381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59ed20 (9): Bad file descriptor 00:25:43.249 [2024-12-06 19:24:28.297378] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:43.249 [2024-12-06 19:24:28.297398] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:43.508 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:43.509 19:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:44.447 19:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.385 [2024-12-06 19:24:30.349447] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:45.385 [2024-12-06 19:24:30.349479] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:45.385 [2024-12-06 19:24:30.349504] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.644 [2024-12-06 19:24:30.477943] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:45.644 19:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.644 [2024-12-06 19:24:30.577749] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:45.644 [2024-12-06 19:24:30.578500] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x5ba1a0:1 started. 
00:25:45.645 [2024-12-06 19:24:30.579798] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:45.645 [2024-12-06 19:24:30.579842] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:45.645 [2024-12-06 19:24:30.579874] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:45.645 [2024-12-06 19:24:30.579897] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:45.645 [2024-12-06 19:24:30.579910] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:45.645 [2024-12-06 19:24:30.587647] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x5ba1a0 was disconnected and freed. delete nvme_qpair. 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:46.580 19:24:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 302543 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 302543 ']' 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 302543 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302543 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302543' 00:25:46.580 killing process with pid 302543 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 302543 00:25:46.580 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 302543 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.839 19:24:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.839 rmmod nvme_tcp 00:25:46.839 rmmod nvme_fabrics 00:25:46.839 rmmod nvme_keyring 00:25:46.839 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 302414 ']' 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 302414 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 302414 ']' 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 302414 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302414 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:47.098 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302414' 00:25:47.098 killing process 
with pid 302414 00:25:47.099 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 302414 00:25:47.099 19:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 302414 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.357 19:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.275 00:25:49.275 real 0m17.920s 00:25:49.275 user 0m25.944s 00:25:49.275 sys 0m3.081s 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.275 ************************************ 00:25:49.275 END TEST nvmf_discovery_remove_ifc 00:25:49.275 ************************************ 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.275 ************************************ 00:25:49.275 START TEST nvmf_identify_kernel_target 00:25:49.275 ************************************ 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:49.275 * Looking for test storage... 
00:25:49.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:49.275 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:49.534 19:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.534 19:24:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:49.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.534 --rc genhtml_branch_coverage=1 00:25:49.534 --rc genhtml_function_coverage=1 00:25:49.534 --rc genhtml_legend=1 00:25:49.534 --rc geninfo_all_blocks=1 00:25:49.534 --rc geninfo_unexecuted_blocks=1 00:25:49.534 00:25:49.534 ' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:49.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.534 --rc genhtml_branch_coverage=1 00:25:49.534 --rc genhtml_function_coverage=1 00:25:49.534 --rc genhtml_legend=1 00:25:49.534 --rc geninfo_all_blocks=1 00:25:49.534 --rc geninfo_unexecuted_blocks=1 00:25:49.534 00:25:49.534 ' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:49.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.534 --rc genhtml_branch_coverage=1 00:25:49.534 --rc genhtml_function_coverage=1 00:25:49.534 --rc genhtml_legend=1 00:25:49.534 --rc geninfo_all_blocks=1 00:25:49.534 --rc geninfo_unexecuted_blocks=1 00:25:49.534 00:25:49.534 ' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:49.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.534 --rc genhtml_branch_coverage=1 00:25:49.534 --rc genhtml_function_coverage=1 00:25:49.534 --rc genhtml_legend=1 00:25:49.534 --rc geninfo_all_blocks=1 00:25:49.534 --rc geninfo_unexecuted_blocks=1 00:25:49.534 00:25:49.534 ' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.534 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:49.535 19:24:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.442 19:24:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:51.442 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:51.443 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.443 19:24:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:51.443 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.443 19:24:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:51.443 Found net devices under 0000:84:00.0: cvl_0_0 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:51.443 Found net devices under 0000:84:00.1: cvl_0_1 
00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:51.443 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.701 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.701 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.701 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:51.701 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:51.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:51.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:25:51.701 00:25:51.701 --- 10.0.0.2 ping statistics --- 00:25:51.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.702 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:25:51.702 00:25:51.702 --- 10.0.0.1 ping statistics --- 00:25:51.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.702 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:51.702 
19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:51.702 19:24:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:53.080 Waiting for block devices as requested 00:25:53.080 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:25:53.080 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:53.080 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:53.080 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:53.340 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:53.340 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:53.340 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:53.340 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:53.340 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:53.600 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:53.600 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:53.600 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:53.859 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:53.859 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:53.859 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:25:53.859 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:54.117 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:54.117 No valid GPT data, bailing 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:54.117 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:25:54.375 00:25:54.375 Discovery Log Number of Records 2, Generation counter 2 00:25:54.375 =====Discovery Log Entry 0====== 00:25:54.375 trtype: tcp 00:25:54.375 adrfam: ipv4 00:25:54.375 subtype: current discovery subsystem 
00:25:54.375 treq: not specified, sq flow control disable supported 00:25:54.375 portid: 1 00:25:54.375 trsvcid: 4420 00:25:54.375 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:54.375 traddr: 10.0.0.1 00:25:54.375 eflags: none 00:25:54.375 sectype: none 00:25:54.375 =====Discovery Log Entry 1====== 00:25:54.375 trtype: tcp 00:25:54.375 adrfam: ipv4 00:25:54.375 subtype: nvme subsystem 00:25:54.375 treq: not specified, sq flow control disable supported 00:25:54.375 portid: 1 00:25:54.375 trsvcid: 4420 00:25:54.375 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:54.375 traddr: 10.0.0.1 00:25:54.375 eflags: none 00:25:54.375 sectype: none 00:25:54.375 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:54.375 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:54.375 ===================================================== 00:25:54.375 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:54.375 ===================================================== 00:25:54.375 Controller Capabilities/Features 00:25:54.375 ================================ 00:25:54.375 Vendor ID: 0000 00:25:54.375 Subsystem Vendor ID: 0000 00:25:54.375 Serial Number: 078b449abbba47861f50 00:25:54.375 Model Number: Linux 00:25:54.375 Firmware Version: 6.8.9-20 00:25:54.375 Recommended Arb Burst: 0 00:25:54.375 IEEE OUI Identifier: 00 00 00 00:25:54.375 Multi-path I/O 00:25:54.375 May have multiple subsystem ports: No 00:25:54.375 May have multiple controllers: No 00:25:54.375 Associated with SR-IOV VF: No 00:25:54.375 Max Data Transfer Size: Unlimited 00:25:54.376 Max Number of Namespaces: 0 00:25:54.376 Max Number of I/O Queues: 1024 00:25:54.376 NVMe Specification Version (VS): 1.3 00:25:54.376 NVMe Specification Version (Identify): 1.3 00:25:54.376 Maximum Queue Entries: 1024 
00:25:54.376 Contiguous Queues Required: No 00:25:54.376 Arbitration Mechanisms Supported 00:25:54.376 Weighted Round Robin: Not Supported 00:25:54.376 Vendor Specific: Not Supported 00:25:54.376 Reset Timeout: 7500 ms 00:25:54.376 Doorbell Stride: 4 bytes 00:25:54.376 NVM Subsystem Reset: Not Supported 00:25:54.376 Command Sets Supported 00:25:54.376 NVM Command Set: Supported 00:25:54.376 Boot Partition: Not Supported 00:25:54.376 Memory Page Size Minimum: 4096 bytes 00:25:54.376 Memory Page Size Maximum: 4096 bytes 00:25:54.376 Persistent Memory Region: Not Supported 00:25:54.376 Optional Asynchronous Events Supported 00:25:54.376 Namespace Attribute Notices: Not Supported 00:25:54.376 Firmware Activation Notices: Not Supported 00:25:54.376 ANA Change Notices: Not Supported 00:25:54.376 PLE Aggregate Log Change Notices: Not Supported 00:25:54.376 LBA Status Info Alert Notices: Not Supported 00:25:54.376 EGE Aggregate Log Change Notices: Not Supported 00:25:54.376 Normal NVM Subsystem Shutdown event: Not Supported 00:25:54.376 Zone Descriptor Change Notices: Not Supported 00:25:54.376 Discovery Log Change Notices: Supported 00:25:54.376 Controller Attributes 00:25:54.376 128-bit Host Identifier: Not Supported 00:25:54.376 Non-Operational Permissive Mode: Not Supported 00:25:54.376 NVM Sets: Not Supported 00:25:54.376 Read Recovery Levels: Not Supported 00:25:54.376 Endurance Groups: Not Supported 00:25:54.376 Predictable Latency Mode: Not Supported 00:25:54.376 Traffic Based Keep ALive: Not Supported 00:25:54.376 Namespace Granularity: Not Supported 00:25:54.376 SQ Associations: Not Supported 00:25:54.376 UUID List: Not Supported 00:25:54.376 Multi-Domain Subsystem: Not Supported 00:25:54.376 Fixed Capacity Management: Not Supported 00:25:54.376 Variable Capacity Management: Not Supported 00:25:54.376 Delete Endurance Group: Not Supported 00:25:54.376 Delete NVM Set: Not Supported 00:25:54.376 Extended LBA Formats Supported: Not Supported 00:25:54.376 Flexible 
Data Placement Supported: Not Supported 00:25:54.376 00:25:54.376 Controller Memory Buffer Support 00:25:54.376 ================================ 00:25:54.376 Supported: No 00:25:54.376 00:25:54.376 Persistent Memory Region Support 00:25:54.376 ================================ 00:25:54.376 Supported: No 00:25:54.376 00:25:54.376 Admin Command Set Attributes 00:25:54.376 ============================ 00:25:54.376 Security Send/Receive: Not Supported 00:25:54.376 Format NVM: Not Supported 00:25:54.376 Firmware Activate/Download: Not Supported 00:25:54.376 Namespace Management: Not Supported 00:25:54.376 Device Self-Test: Not Supported 00:25:54.376 Directives: Not Supported 00:25:54.376 NVMe-MI: Not Supported 00:25:54.376 Virtualization Management: Not Supported 00:25:54.376 Doorbell Buffer Config: Not Supported 00:25:54.376 Get LBA Status Capability: Not Supported 00:25:54.376 Command & Feature Lockdown Capability: Not Supported 00:25:54.376 Abort Command Limit: 1 00:25:54.376 Async Event Request Limit: 1 00:25:54.376 Number of Firmware Slots: N/A 00:25:54.376 Firmware Slot 1 Read-Only: N/A 00:25:54.376 Firmware Activation Without Reset: N/A 00:25:54.376 Multiple Update Detection Support: N/A 00:25:54.376 Firmware Update Granularity: No Information Provided 00:25:54.376 Per-Namespace SMART Log: No 00:25:54.376 Asymmetric Namespace Access Log Page: Not Supported 00:25:54.376 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:54.376 Command Effects Log Page: Not Supported 00:25:54.376 Get Log Page Extended Data: Supported 00:25:54.376 Telemetry Log Pages: Not Supported 00:25:54.376 Persistent Event Log Pages: Not Supported 00:25:54.376 Supported Log Pages Log Page: May Support 00:25:54.376 Commands Supported & Effects Log Page: Not Supported 00:25:54.376 Feature Identifiers & Effects Log Page:May Support 00:25:54.376 NVMe-MI Commands & Effects Log Page: May Support 00:25:54.376 Data Area 4 for Telemetry Log: Not Supported 00:25:54.376 Error Log Page Entries 
Supported: 1 00:25:54.376 Keep Alive: Not Supported 00:25:54.376 00:25:54.376 NVM Command Set Attributes 00:25:54.376 ========================== 00:25:54.376 Submission Queue Entry Size 00:25:54.376 Max: 1 00:25:54.376 Min: 1 00:25:54.376 Completion Queue Entry Size 00:25:54.376 Max: 1 00:25:54.376 Min: 1 00:25:54.376 Number of Namespaces: 0 00:25:54.376 Compare Command: Not Supported 00:25:54.376 Write Uncorrectable Command: Not Supported 00:25:54.376 Dataset Management Command: Not Supported 00:25:54.376 Write Zeroes Command: Not Supported 00:25:54.376 Set Features Save Field: Not Supported 00:25:54.376 Reservations: Not Supported 00:25:54.376 Timestamp: Not Supported 00:25:54.376 Copy: Not Supported 00:25:54.376 Volatile Write Cache: Not Present 00:25:54.376 Atomic Write Unit (Normal): 1 00:25:54.376 Atomic Write Unit (PFail): 1 00:25:54.376 Atomic Compare & Write Unit: 1 00:25:54.376 Fused Compare & Write: Not Supported 00:25:54.376 Scatter-Gather List 00:25:54.376 SGL Command Set: Supported 00:25:54.376 SGL Keyed: Not Supported 00:25:54.376 SGL Bit Bucket Descriptor: Not Supported 00:25:54.376 SGL Metadata Pointer: Not Supported 00:25:54.376 Oversized SGL: Not Supported 00:25:54.376 SGL Metadata Address: Not Supported 00:25:54.376 SGL Offset: Supported 00:25:54.376 Transport SGL Data Block: Not Supported 00:25:54.376 Replay Protected Memory Block: Not Supported 00:25:54.376 00:25:54.376 Firmware Slot Information 00:25:54.376 ========================= 00:25:54.376 Active slot: 0 00:25:54.376 00:25:54.376 00:25:54.376 Error Log 00:25:54.376 ========= 00:25:54.376 00:25:54.376 Active Namespaces 00:25:54.376 ================= 00:25:54.376 Discovery Log Page 00:25:54.376 ================== 00:25:54.376 Generation Counter: 2 00:25:54.376 Number of Records: 2 00:25:54.376 Record Format: 0 00:25:54.376 00:25:54.376 Discovery Log Entry 0 00:25:54.376 ---------------------- 00:25:54.376 Transport Type: 3 (TCP) 00:25:54.376 Address Family: 1 (IPv4) 00:25:54.376 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:54.376 Entry Flags: 00:25:54.376 Duplicate Returned Information: 0 00:25:54.376 Explicit Persistent Connection Support for Discovery: 0 00:25:54.376 Transport Requirements: 00:25:54.376 Secure Channel: Not Specified 00:25:54.376 Port ID: 1 (0x0001) 00:25:54.376 Controller ID: 65535 (0xffff) 00:25:54.376 Admin Max SQ Size: 32 00:25:54.376 Transport Service Identifier: 4420 00:25:54.376 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:54.376 Transport Address: 10.0.0.1 00:25:54.376 Discovery Log Entry 1 00:25:54.376 ---------------------- 00:25:54.376 Transport Type: 3 (TCP) 00:25:54.376 Address Family: 1 (IPv4) 00:25:54.376 Subsystem Type: 2 (NVM Subsystem) 00:25:54.376 Entry Flags: 00:25:54.376 Duplicate Returned Information: 0 00:25:54.376 Explicit Persistent Connection Support for Discovery: 0 00:25:54.376 Transport Requirements: 00:25:54.376 Secure Channel: Not Specified 00:25:54.376 Port ID: 1 (0x0001) 00:25:54.376 Controller ID: 65535 (0xffff) 00:25:54.376 Admin Max SQ Size: 32 00:25:54.376 Transport Service Identifier: 4420 00:25:54.376 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:54.376 Transport Address: 10.0.0.1 00:25:54.376 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:54.635 get_feature(0x01) failed 00:25:54.635 get_feature(0x02) failed 00:25:54.635 get_feature(0x04) failed 00:25:54.635 ===================================================== 00:25:54.635 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:54.635 ===================================================== 00:25:54.635 Controller Capabilities/Features 00:25:54.635 ================================ 00:25:54.635 Vendor ID: 0000 00:25:54.635 Subsystem Vendor ID: 
0000 00:25:54.635 Serial Number: b8a50e77da5012d8fc98 00:25:54.635 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:54.635 Firmware Version: 6.8.9-20 00:25:54.635 Recommended Arb Burst: 6 00:25:54.635 IEEE OUI Identifier: 00 00 00 00:25:54.635 Multi-path I/O 00:25:54.635 May have multiple subsystem ports: Yes 00:25:54.635 May have multiple controllers: Yes 00:25:54.635 Associated with SR-IOV VF: No 00:25:54.635 Max Data Transfer Size: Unlimited 00:25:54.635 Max Number of Namespaces: 1024 00:25:54.635 Max Number of I/O Queues: 128 00:25:54.635 NVMe Specification Version (VS): 1.3 00:25:54.635 NVMe Specification Version (Identify): 1.3 00:25:54.635 Maximum Queue Entries: 1024 00:25:54.635 Contiguous Queues Required: No 00:25:54.635 Arbitration Mechanisms Supported 00:25:54.635 Weighted Round Robin: Not Supported 00:25:54.635 Vendor Specific: Not Supported 00:25:54.635 Reset Timeout: 7500 ms 00:25:54.636 Doorbell Stride: 4 bytes 00:25:54.636 NVM Subsystem Reset: Not Supported 00:25:54.636 Command Sets Supported 00:25:54.636 NVM Command Set: Supported 00:25:54.636 Boot Partition: Not Supported 00:25:54.636 Memory Page Size Minimum: 4096 bytes 00:25:54.636 Memory Page Size Maximum: 4096 bytes 00:25:54.636 Persistent Memory Region: Not Supported 00:25:54.636 Optional Asynchronous Events Supported 00:25:54.636 Namespace Attribute Notices: Supported 00:25:54.636 Firmware Activation Notices: Not Supported 00:25:54.636 ANA Change Notices: Supported 00:25:54.636 PLE Aggregate Log Change Notices: Not Supported 00:25:54.636 LBA Status Info Alert Notices: Not Supported 00:25:54.636 EGE Aggregate Log Change Notices: Not Supported 00:25:54.636 Normal NVM Subsystem Shutdown event: Not Supported 00:25:54.636 Zone Descriptor Change Notices: Not Supported 00:25:54.636 Discovery Log Change Notices: Not Supported 00:25:54.636 Controller Attributes 00:25:54.636 128-bit Host Identifier: Supported 00:25:54.636 Non-Operational Permissive Mode: Not Supported 00:25:54.636 NVM Sets: Not 
Supported 00:25:54.636 Read Recovery Levels: Not Supported 00:25:54.636 Endurance Groups: Not Supported 00:25:54.636 Predictable Latency Mode: Not Supported 00:25:54.636 Traffic Based Keep ALive: Supported 00:25:54.636 Namespace Granularity: Not Supported 00:25:54.636 SQ Associations: Not Supported 00:25:54.636 UUID List: Not Supported 00:25:54.636 Multi-Domain Subsystem: Not Supported 00:25:54.636 Fixed Capacity Management: Not Supported 00:25:54.636 Variable Capacity Management: Not Supported 00:25:54.636 Delete Endurance Group: Not Supported 00:25:54.636 Delete NVM Set: Not Supported 00:25:54.636 Extended LBA Formats Supported: Not Supported 00:25:54.636 Flexible Data Placement Supported: Not Supported 00:25:54.636 00:25:54.636 Controller Memory Buffer Support 00:25:54.636 ================================ 00:25:54.636 Supported: No 00:25:54.636 00:25:54.636 Persistent Memory Region Support 00:25:54.636 ================================ 00:25:54.636 Supported: No 00:25:54.636 00:25:54.636 Admin Command Set Attributes 00:25:54.636 ============================ 00:25:54.636 Security Send/Receive: Not Supported 00:25:54.636 Format NVM: Not Supported 00:25:54.636 Firmware Activate/Download: Not Supported 00:25:54.636 Namespace Management: Not Supported 00:25:54.636 Device Self-Test: Not Supported 00:25:54.636 Directives: Not Supported 00:25:54.636 NVMe-MI: Not Supported 00:25:54.636 Virtualization Management: Not Supported 00:25:54.636 Doorbell Buffer Config: Not Supported 00:25:54.636 Get LBA Status Capability: Not Supported 00:25:54.636 Command & Feature Lockdown Capability: Not Supported 00:25:54.636 Abort Command Limit: 4 00:25:54.636 Async Event Request Limit: 4 00:25:54.636 Number of Firmware Slots: N/A 00:25:54.636 Firmware Slot 1 Read-Only: N/A 00:25:54.636 Firmware Activation Without Reset: N/A 00:25:54.636 Multiple Update Detection Support: N/A 00:25:54.636 Firmware Update Granularity: No Information Provided 00:25:54.636 Per-Namespace SMART Log: Yes 
00:25:54.636 Asymmetric Namespace Access Log Page: Supported 00:25:54.636 ANA Transition Time : 10 sec 00:25:54.636 00:25:54.636 Asymmetric Namespace Access Capabilities 00:25:54.636 ANA Optimized State : Supported 00:25:54.636 ANA Non-Optimized State : Supported 00:25:54.636 ANA Inaccessible State : Supported 00:25:54.636 ANA Persistent Loss State : Supported 00:25:54.636 ANA Change State : Supported 00:25:54.636 ANAGRPID is not changed : No 00:25:54.636 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:54.636 00:25:54.636 ANA Group Identifier Maximum : 128 00:25:54.636 Number of ANA Group Identifiers : 128 00:25:54.636 Max Number of Allowed Namespaces : 1024 00:25:54.636 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:54.636 Command Effects Log Page: Supported 00:25:54.636 Get Log Page Extended Data: Supported 00:25:54.636 Telemetry Log Pages: Not Supported 00:25:54.636 Persistent Event Log Pages: Not Supported 00:25:54.636 Supported Log Pages Log Page: May Support 00:25:54.636 Commands Supported & Effects Log Page: Not Supported 00:25:54.636 Feature Identifiers & Effects Log Page:May Support 00:25:54.636 NVMe-MI Commands & Effects Log Page: May Support 00:25:54.636 Data Area 4 for Telemetry Log: Not Supported 00:25:54.636 Error Log Page Entries Supported: 128 00:25:54.636 Keep Alive: Supported 00:25:54.636 Keep Alive Granularity: 1000 ms 00:25:54.636 00:25:54.636 NVM Command Set Attributes 00:25:54.636 ========================== 00:25:54.636 Submission Queue Entry Size 00:25:54.636 Max: 64 00:25:54.636 Min: 64 00:25:54.636 Completion Queue Entry Size 00:25:54.636 Max: 16 00:25:54.636 Min: 16 00:25:54.636 Number of Namespaces: 1024 00:25:54.636 Compare Command: Not Supported 00:25:54.636 Write Uncorrectable Command: Not Supported 00:25:54.636 Dataset Management Command: Supported 00:25:54.636 Write Zeroes Command: Supported 00:25:54.636 Set Features Save Field: Not Supported 00:25:54.636 Reservations: Not Supported 00:25:54.636 Timestamp: Not Supported 
00:25:54.636 Copy: Not Supported 00:25:54.636 Volatile Write Cache: Present 00:25:54.636 Atomic Write Unit (Normal): 1 00:25:54.636 Atomic Write Unit (PFail): 1 00:25:54.636 Atomic Compare & Write Unit: 1 00:25:54.636 Fused Compare & Write: Not Supported 00:25:54.636 Scatter-Gather List 00:25:54.636 SGL Command Set: Supported 00:25:54.636 SGL Keyed: Not Supported 00:25:54.636 SGL Bit Bucket Descriptor: Not Supported 00:25:54.636 SGL Metadata Pointer: Not Supported 00:25:54.636 Oversized SGL: Not Supported 00:25:54.636 SGL Metadata Address: Not Supported 00:25:54.636 SGL Offset: Supported 00:25:54.636 Transport SGL Data Block: Not Supported 00:25:54.636 Replay Protected Memory Block: Not Supported 00:25:54.636 00:25:54.636 Firmware Slot Information 00:25:54.636 ========================= 00:25:54.636 Active slot: 0 00:25:54.636 00:25:54.636 Asymmetric Namespace Access 00:25:54.636 =========================== 00:25:54.636 Change Count : 0 00:25:54.636 Number of ANA Group Descriptors : 1 00:25:54.636 ANA Group Descriptor : 0 00:25:54.636 ANA Group ID : 1 00:25:54.636 Number of NSID Values : 1 00:25:54.636 Change Count : 0 00:25:54.636 ANA State : 1 00:25:54.636 Namespace Identifier : 1 00:25:54.636 00:25:54.636 Commands Supported and Effects 00:25:54.636 ============================== 00:25:54.636 Admin Commands 00:25:54.636 -------------- 00:25:54.636 Get Log Page (02h): Supported 00:25:54.636 Identify (06h): Supported 00:25:54.636 Abort (08h): Supported 00:25:54.636 Set Features (09h): Supported 00:25:54.636 Get Features (0Ah): Supported 00:25:54.636 Asynchronous Event Request (0Ch): Supported 00:25:54.636 Keep Alive (18h): Supported 00:25:54.636 I/O Commands 00:25:54.636 ------------ 00:25:54.636 Flush (00h): Supported 00:25:54.636 Write (01h): Supported LBA-Change 00:25:54.636 Read (02h): Supported 00:25:54.636 Write Zeroes (08h): Supported LBA-Change 00:25:54.636 Dataset Management (09h): Supported 00:25:54.636 00:25:54.636 Error Log 00:25:54.636 ========= 
00:25:54.636 Entry: 0 00:25:54.636 Error Count: 0x3 00:25:54.636 Submission Queue Id: 0x0 00:25:54.636 Command Id: 0x5 00:25:54.636 Phase Bit: 0 00:25:54.636 Status Code: 0x2 00:25:54.636 Status Code Type: 0x0 00:25:54.636 Do Not Retry: 1 00:25:54.636 Error Location: 0x28 00:25:54.636 LBA: 0x0 00:25:54.636 Namespace: 0x0 00:25:54.636 Vendor Log Page: 0x0 00:25:54.636 ----------- 00:25:54.636 Entry: 1 00:25:54.636 Error Count: 0x2 00:25:54.636 Submission Queue Id: 0x0 00:25:54.636 Command Id: 0x5 00:25:54.636 Phase Bit: 0 00:25:54.636 Status Code: 0x2 00:25:54.636 Status Code Type: 0x0 00:25:54.636 Do Not Retry: 1 00:25:54.636 Error Location: 0x28 00:25:54.636 LBA: 0x0 00:25:54.636 Namespace: 0x0 00:25:54.636 Vendor Log Page: 0x0 00:25:54.636 ----------- 00:25:54.636 Entry: 2 00:25:54.636 Error Count: 0x1 00:25:54.636 Submission Queue Id: 0x0 00:25:54.636 Command Id: 0x4 00:25:54.636 Phase Bit: 0 00:25:54.636 Status Code: 0x2 00:25:54.636 Status Code Type: 0x0 00:25:54.636 Do Not Retry: 1 00:25:54.636 Error Location: 0x28 00:25:54.636 LBA: 0x0 00:25:54.636 Namespace: 0x0 00:25:54.636 Vendor Log Page: 0x0 00:25:54.636 00:25:54.636 Number of Queues 00:25:54.636 ================ 00:25:54.636 Number of I/O Submission Queues: 128 00:25:54.636 Number of I/O Completion Queues: 128 00:25:54.636 00:25:54.636 ZNS Specific Controller Data 00:25:54.637 ============================ 00:25:54.637 Zone Append Size Limit: 0 00:25:54.637 00:25:54.637 00:25:54.637 Active Namespaces 00:25:54.637 ================= 00:25:54.637 get_feature(0x05) failed 00:25:54.637 Namespace ID:1 00:25:54.637 Command Set Identifier: NVM (00h) 00:25:54.637 Deallocate: Supported 00:25:54.637 Deallocated/Unwritten Error: Not Supported 00:25:54.637 Deallocated Read Value: Unknown 00:25:54.637 Deallocate in Write Zeroes: Not Supported 00:25:54.637 Deallocated Guard Field: 0xFFFF 00:25:54.637 Flush: Supported 00:25:54.637 Reservation: Not Supported 00:25:54.637 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:54.637 Size (in LBAs): 1953525168 (931GiB) 00:25:54.637 Capacity (in LBAs): 1953525168 (931GiB) 00:25:54.637 Utilization (in LBAs): 1953525168 (931GiB) 00:25:54.637 UUID: a600648c-15b7-4b08-96eb-9eebede99dd2 00:25:54.637 Thin Provisioning: Not Supported 00:25:54.637 Per-NS Atomic Units: Yes 00:25:54.637 Atomic Boundary Size (Normal): 0 00:25:54.637 Atomic Boundary Size (PFail): 0 00:25:54.637 Atomic Boundary Offset: 0 00:25:54.637 NGUID/EUI64 Never Reused: No 00:25:54.637 ANA group ID: 1 00:25:54.637 Namespace Write Protected: No 00:25:54.637 Number of LBA Formats: 1 00:25:54.637 Current LBA Format: LBA Format #00 00:25:54.637 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:54.637 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.637 rmmod nvme_tcp 00:25:54.637 rmmod nvme_fabrics 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
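The Error Log entries in the identify dump above all report Status Code 0x2 with Status Code Type 0x0 and Do Not Retry set. A minimal sketch of what those fields mean, using the standard NVMe status-code tables (SCT 0x0 is Generic Command Status, where SC 0x02 is Invalid Field in Command); the function name and mapping coverage are illustrative, not part of the test:

```shell
# Sketch: spell out the Error Log status fields printed above.
# Only the generic-status codes relevant here are mapped; everything
# else falls through to a raw hex label.
decode_status() {
  local sct=$1 sc=$2 dnr=$3 sct_name sc_name retry
  case $sct in
    0x0) sct_name="Generic Command Status" ;;
    0x1) sct_name="Command Specific Status" ;;
    0x2) sct_name="Media and Data Integrity Errors" ;;
    *)   sct_name="Vendor Specific/Unknown" ;;
  esac
  case $sc in
    0x0) sc_name="Successful Completion" ;;
    0x1) sc_name="Invalid Command Opcode" ;;
    0x2) sc_name="Invalid Field in Command" ;;
    *)   sc_name="SC $sc" ;;
  esac
  # DNR=1 tells the host the command will fail the same way if resubmitted
  [ "$dnr" = 1 ] && retry="do not retry" || retry="may retry"
  echo "$sct_name: $sc_name ($retry)"
}
# The three log entries above: SCT 0x0, SC 0x2, DNR 1
decode_status 0x0 0x2 1
```

So the kernel target is rejecting a field in the host's command (here, the unsupported get_feature calls the tool reported just before the dump) rather than hitting a media or transport error.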
00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.637 19:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.545 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:56.545 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:56.545 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:56.545 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:56.805 19:24:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:56.805 19:24:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:58.183 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:58.183 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:58.183 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
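The `clean_kernel_target` trace above (common.sh@716-719) tears down the kernel nvmet configfs tree in a specific order: the port-to-subsystem symlink must go before the namespace, port, and subsystem directories can be removed. A dry-run sketch of that ordering, echoing the commands instead of touching configfs; the function name is illustrative, the paths and NQN/port values are the ones from this log:

```shell
# Sketch of the configfs teardown order clean_kernel_target performs above.
# Echoes the commands rather than running them, so it is safe anywhere.
nvmet_teardown_cmds() {
  local nqn=$1 port=$2
  # 1. unlink the port -> subsystem association first (it pins both sides)
  echo "rm -f /sys/kernel/config/nvmet/ports/${port}/subsystems/${nqn}"
  # 2. then remove namespaces, the port, and finally the subsystem itself
  echo "rmdir /sys/kernel/config/nvmet/subsystems/${nqn}/namespaces/1"
  echo "rmdir /sys/kernel/config/nvmet/ports/${port}"
  echo "rmdir /sys/kernel/config/nvmet/subsystems/${nqn}"
}
nvmet_teardown_cmds nqn.2016-06.io.spdk:testnqn 1
```

Running the rmdirs in the opposite order fails with EBUSY, which is why the script only attempts `modprobe -r nvmet_tcp nvmet` after the tree is empty.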
00:25:59.120 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:25:59.120 00:25:59.120 real 0m9.769s 00:25:59.120 user 0m2.111s 00:25:59.120 sys 0m3.647s 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.120 ************************************ 00:25:59.120 END TEST nvmf_identify_kernel_target 00:25:59.120 ************************************ 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.120 ************************************ 00:25:59.120 START TEST nvmf_auth_host 00:25:59.120 ************************************ 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.120 * Looking for test storage... 
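The `run_test nvmf_auth_host ...` invocation above wraps the test script between the START TEST / END TEST banners and the `real/user/sys` timing seen at the end of the previous test. A minimal stand-in for that banner-and-run pattern; the real `run_test` in autotest_common.sh also manages xtrace state and exit-code bookkeeping, which this sketch omits:

```shell
# Sketch of the run_test wrapper pattern visible in this log:
# banner, run the command under `time`, banner, propagate the exit code.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"; local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test_sketch demo true
```

The banners are what make a single flat console log like this one greppable per test case.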
00:25:59.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:59.120 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:59.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.379 --rc genhtml_branch_coverage=1 00:25:59.379 --rc genhtml_function_coverage=1 00:25:59.379 --rc genhtml_legend=1 00:25:59.379 --rc geninfo_all_blocks=1 00:25:59.379 --rc geninfo_unexecuted_blocks=1 00:25:59.379 00:25:59.379 ' 00:25:59.379 19:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:59.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.379 --rc genhtml_branch_coverage=1 00:25:59.379 --rc genhtml_function_coverage=1 00:25:59.379 --rc genhtml_legend=1 00:25:59.379 --rc geninfo_all_blocks=1 00:25:59.379 --rc geninfo_unexecuted_blocks=1 00:25:59.379 00:25:59.379 ' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:59.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.379 --rc genhtml_branch_coverage=1 00:25:59.379 --rc genhtml_function_coverage=1 00:25:59.379 --rc genhtml_legend=1 00:25:59.379 --rc geninfo_all_blocks=1 00:25:59.379 --rc geninfo_unexecuted_blocks=1 00:25:59.379 00:25:59.379 ' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:59.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.379 --rc genhtml_branch_coverage=1 00:25:59.379 --rc genhtml_function_coverage=1 00:25:59.379 --rc genhtml_legend=1 00:25:59.379 --rc geninfo_all_blocks=1 00:25:59.379 --rc geninfo_unexecuted_blocks=1 00:25:59.379 00:25:59.379 ' 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.379 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
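The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.-:` into the `ver1`/`ver2` arrays and compares field by field to decide whether the installed lcov is older than 2.0. A simplified stand-in for that comparison (the real scripts/common.sh handles more operators and edge cases; numeric fields only here):

```shell
# Sketch of the dotted-version less-than compare traced above.
# Splits on the same ".-:" separator set and compares numerically,
# padding the shorter version with zeros.
version_lt() {
  local IFS=.-:
  read -ra a <<< "$1"; read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 is older than 2"
```

This is why the harness above takes the `lt 1.15 2` branch and enables the `--rc lcov_branch_coverage=1` option set for the older lcov.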
00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.380 19:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:59.380 19:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.380 19:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.296 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:01.297 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:01.297 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:01.297 Found net devices under 0000:84:00.0: cvl_0_0 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:01.297 Found net devices under 0000:84:00.1: cvl_0_1 00:26:01.297 19:24:46 
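The trace above resolves each discovered PCI function to its network interface names by globbing `/sys/bus/pci/devices/<addr>/net/*`. A minimal stand-alone sketch of that lookup (the `sysfs_root` parameter is my addition so the function can be exercised against a fake sysfs tree; it is not part of the test scripts):

```python
import glob
import os


def net_devs_for_pci(pci_addr: str, sysfs_root: str = "/sys/bus/pci/devices") -> list:
    """List the netdev names the kernel exposes for one PCI function,
    mirroring the log's per-device scan of /sys/bus/pci/devices/<addr>/net/."""
    pattern = os.path.join(sysfs_root, pci_addr, "net", "*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))
```

On the machine in this log, `net_devs_for_pci("0000:84:00.0")` would report the `cvl_0_0` port found in the trace.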
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.297 19:24:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.297 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.557 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:26:01.558 00:26:01.558 --- 10.0.0.2 ping statistics --- 00:26:01.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.558 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:26:01.558 00:26:01.558 --- 10.0.0.1 ping statistics --- 00:26:01.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.558 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=309802 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:01.558 19:24:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 309802 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 309802 ']' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.558 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e31d97e2aff490b3106d29c3e242b914 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.t0T 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e31d97e2aff490b3106d29c3e242b914 0 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e31d97e2aff490b3106d29c3e242b914 0 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e31d97e2aff490b3106d29c3e242b914 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:01.816 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.t0T 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.t0T 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.t0T 00:26:02.076 19:24:46 
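The `gen_dhchap_key` steps traced above draw random bytes with `xxd -p -c0 /dev/urandom`, then have `format_dhchap_key` wrap them into a DHHC-1 secret via an inline `python -` heredoc. A hedged re-implementation of that flow is sketched below; the byte layout (raw key followed by its little-endian CRC32, base64-encoded, prefixed with `DHHC-1:<digest id>:`) is my assumption based on the NVMe DH-HMAC-CHAP secret representation, not code copied from `nvmf/common.sh`:

```python
import base64
import os
import zlib


def format_dhchap_key(raw: bytes, digest_id: int) -> str:
    """Assumed DHHC-1 secret layout: base64(key || CRC32(key), CRC little-endian),
    with the digest id field matching the log's mapping (null=0, sha256=1,
    sha384=2, sha512=3)."""
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return "DHHC-1:{:02d}:{}:".format(digest_id, b64)


def gen_dhchap_key(digest_id: int, hex_len: int) -> str:
    """Counterpart of the script's gen_dhchap_key: hex_len hex chars of
    entropy (so hex_len // 2 random bytes), then DHHC-1 formatting."""
    raw = os.urandom(hex_len // 2)
    return format_dhchap_key(raw, digest_id)
```

For example, `gen_dhchap_key(0, 32)` corresponds to the `gen_dhchap_key null 32` call in the trace and yields a secret of the form `DHHC-1:00:...:`.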
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=af059d30aaad6544a83c9807e9dfd82a59160985564b711b4b4785030afbdffc 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2zK 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key af059d30aaad6544a83c9807e9dfd82a59160985564b711b4b4785030afbdffc 3 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 af059d30aaad6544a83c9807e9dfd82a59160985564b711b4b4785030afbdffc 3 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=af059d30aaad6544a83c9807e9dfd82a59160985564b711b4b4785030afbdffc 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2zK 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2zK 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2zK 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=97e9901317fdaba7a47eac2f3e66bc3c39f6c21807ed5b4a 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wDx 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 97e9901317fdaba7a47eac2f3e66bc3c39f6c21807ed5b4a 0 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 97e9901317fdaba7a47eac2f3e66bc3c39f6c21807ed5b4a 0 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.076 19:24:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=97e9901317fdaba7a47eac2f3e66bc3c39f6c21807ed5b4a 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wDx 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wDx 00:26:02.076 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wDx 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d13797c2e758fa8e1f788e822e5a5951ee86240b498ac39c 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wFx 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d13797c2e758fa8e1f788e822e5a5951ee86240b498ac39c 2 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d13797c2e758fa8e1f788e822e5a5951ee86240b498ac39c 2 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d13797c2e758fa8e1f788e822e5a5951ee86240b498ac39c 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:02.077 19:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wFx 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wFx 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wFx 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=77c8e7fff92e050b33aab195b4fbc24e 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UYb 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 77c8e7fff92e050b33aab195b4fbc24e 1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 77c8e7fff92e050b33aab195b4fbc24e 1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=77c8e7fff92e050b33aab195b4fbc24e 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UYb 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UYb 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UYb 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=52a548966ee7c8f02da698afd008038e 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vpW 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52a548966ee7c8f02da698afd008038e 1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52a548966ee7c8f02da698afd008038e 1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52a548966ee7c8f02da698afd008038e 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:02.077 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vpW 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vpW 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.vpW 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:02.336 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f2188e95225ef828d88dc7abf77aba27f7ef078913f5cfdc 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xRz 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f2188e95225ef828d88dc7abf77aba27f7ef078913f5cfdc 2 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f2188e95225ef828d88dc7abf77aba27f7ef078913f5cfdc 2 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f2188e95225ef828d88dc7abf77aba27f7ef078913f5cfdc 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xRz 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xRz 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xRz 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c479e33b18499d1a4f638544b3922f8 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zCd 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c479e33b18499d1a4f638544b3922f8 0 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c479e33b18499d1a4f638544b3922f8 0 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c479e33b18499d1a4f638544b3922f8 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zCd 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zCd 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zCd 00:26:02.336 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba188ef90164e0d4baa76f6da028d7f59c8f640f1debf5d9ece4f8ce3b91ad6d 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JEF 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba188ef90164e0d4baa76f6da028d7f59c8f640f1debf5d9ece4f8ce3b91ad6d 3 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba188ef90164e0d4baa76f6da028d7f59c8f640f1debf5d9ece4f8ce3b91ad6d 3 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba188ef90164e0d4baa76f6da028d7f59c8f640f1debf5d9ece4f8ce3b91ad6d 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JEF 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JEF 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JEF 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 309802 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 309802 ']' 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
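The gen_dhchap_key/format_key trace above pipes each hex secret (drawn from /dev/urandom via `xxd`) through an inline `python -` helper before writing it to a `/tmp/spdk.key-*` file. A hedged sketch of what that DHHC-1 encoding plausibly does, inferred from the key strings visible later in this log (e.g. `DHHC-1:00:...==:` with digest id 0) rather than from the script source:

```python
import base64
import zlib


def format_dhchap_key(secret: str, hash_id: int) -> str:
    """Hedged sketch of the DHHC-1 secret framing seen in this log.

    Assumption: the ASCII secret bytes are suffixed with their
    little-endian CRC32 and base64-encoded, yielding
    "DHHC-1:<2-digit hash id>:<base64>:", where the hash id matches the
    digest argument traced above (0 = null, 1 = sha256, 2 = sha384,
    3 = sha512).
    """
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return "DHHC-1:{:02x}:{}:".format(hash_id, b64)


# Same secret and digest as the sha384 key generated in the trace above.
key = format_dhchap_key("f2188e95225ef828d88dc7abf77aba27f7ef078913f5cfdc", 2)
```

The trailing CRC lets a consumer detect a truncated or corrupted secret before attempting authentication.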
00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.336 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.t0T 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2zK ]] 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2zK 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wDx 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:02.595 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wFx ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wFx 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UYb 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.vpW ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vpW 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.xRz 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zCd ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zCd 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JEF 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.596 19:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:02.596 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:02.855 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:02.855 19:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:03.790 Waiting for block devices as requested 00:26:03.790 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:04.049 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:04.049 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:04.308 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:04.308 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:04.308 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:04.308 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:04.566 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:04.566 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:04.566 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:04.566 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:04.825 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:04.825 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:04.825 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:04.825 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:04.825 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:05.084 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:05.344 No valid GPT data, bailing 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:05.344 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:26:05.604 00:26:05.604 Discovery Log Number of Records 2, Generation counter 2 00:26:05.604 =====Discovery Log Entry 0====== 00:26:05.604 trtype: tcp 00:26:05.604 adrfam: ipv4 00:26:05.604 subtype: current discovery subsystem 00:26:05.604 treq: not specified, sq flow control disable supported 00:26:05.604 portid: 1 00:26:05.604 trsvcid: 4420 00:26:05.604 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:05.604 traddr: 10.0.0.1 00:26:05.604 eflags: none 00:26:05.604 sectype: none 00:26:05.604 =====Discovery Log Entry 1====== 00:26:05.604 trtype: tcp 00:26:05.604 adrfam: ipv4 00:26:05.604 subtype: nvme subsystem 00:26:05.604 treq: not specified, sq flow control disable supported 00:26:05.604 portid: 1 00:26:05.604 trsvcid: 4420 00:26:05.604 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:05.604 traddr: 10.0.0.1 00:26:05.604 eflags: none 00:26:05.604 sectype: none 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.604 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.862 nvme0n1 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:05.862 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
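The connect_authenticate trace above boils down to two JSON-RPC calls on the initiator side, with arguments taken verbatim from the log (the `rpc_cmd` wrapper is assumed to forward to SPDK's `rpc.py` against `/var/tmp/spdk.sock`):

```shell
# Enable the digests/dhgroups under test, then attach with DH-HMAC-CHAP
# keys previously registered via keyring_file_add_key (key0/ckey0 here,
# matching the keyid=0 iteration of the trace).
rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 \
    --dhchap-dhgroups ffdhe2048

rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
```

Each loop iteration then verifies the controller came up (`bdev_nvme_get_controllers` reporting `nvme0`) and detaches it before trying the next key pair.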
00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.863 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 nvme0n1 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.122 19:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 19:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.122 
19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.122 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 nvme0n1 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:06.382 nvme0n1 00:26:06.382 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.640 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 nvme0n1 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.899 19:24:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.899 nvme0n1 00:26:06.899 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.900 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.900 
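[Editor's note] The trace above repeats one pattern per key: `nvmet_auth_set_key` publishes a DHHC-1 key (and optional controller key) for a digest/dhgroup pair, then `connect_authenticate` restricts negotiation with `bdev_nvme_set_options`, attaches with `bdev_nvme_attach_controller`, verifies `nvme0` exists, and detaches before the next iteration. The sketch below reproduces that loop structure standalone; `rpc_cmd` is stubbed to echo (in the real suite it forwards to SPDK's RPC interface), and the key strings are placeholders, not the keys from this run.

```shell
#!/usr/bin/env bash
# Sketch of the per-key auth loop traced in the log; rpc_cmd is a stub
# so this runs without a live SPDK target.
rpc_cmd() { echo "rpc: $*"; }

digest=sha256
dhgroups=(ffdhe2048 ffdhe3072)
# keys[i] = host key for keyid i; ckeys[i] = optional controller key
# (placeholder values -- real DHHC-1 keys come from the test setup).
keys=("DHHC-1:00:host-key-0:" "DHHC-1:00:host-key-1:")
ckeys=("DHHC-1:03:ctrlr-key-0:" "")

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # connect_authenticate: pin the digest/dhgroup, then attach with the key pair
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
    # :+ expansion (as in auth.sh@58): ckey stays empty when no ctrlr key is set
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    # verify the controller came up, then detach for the next iteration
    rpc_cmd bdev_nvme_get_controllers
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
```

The `${ckeys[keyid]:+...}` idiom is why keyid 4 in the log attaches with `--dhchap-key key4` and no `--dhchap-ctrlr-key`: its `ckey` entry is empty, so the array expands to nothing.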
19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.159 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.159 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.160 19:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:07.418 
19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.418 19:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.418 nvme0n1 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.418 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.676 19:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.676 19:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.676 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.677 nvme0n1 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.677 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.935 19:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:07.935 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.193 nvme0n1
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.193 19:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]]
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:08.193 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.194 nvme0n1
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
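The `get_main_ns_ip` trace repeated above (nvmf/common.sh@769-783) selects which environment variable holds the target address per transport, then resolves it indirectly. A simplified, self-contained re-creation (hypothetical function name `get_main_ns_ip_sketch`; only the variable names are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the IP-candidate selection seen in the trace: rdma uses
# NVMF_FIRST_TARGET_IP, tcp uses NVMF_INITIATOR_IP; the chosen variable
# name is then dereferenced with bash indirect expansion.
get_main_ns_ip_sketch() {
  local transport=$1 ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
  ip=${ip_candidates[$transport]}   # name of the variable to use
  [[ -z ${!ip} ]] && return 1       # fail if the variable is unset/empty
  echo "${!ip}"
}

NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip_sketch tcp   # prints: 10.0.0.1
```

This matches the `echo 10.0.0.1` at nvmf/common.sh@783 in the trace: for the tcp transport the initiator-side address is what the host connects to.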
00:26:08.194 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.452 nvme0n1
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.452 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.710 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:08.710 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:08.710 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d:
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=:
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:08.711 19:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d:
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]]
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=:
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.277 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.278 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.536 nvme0n1
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
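Each iteration above ends with the same verification step: the controller list is fetched, the name is extracted with `jq -r '.[].name'`, and compared to the expected `nvme0` (xtrace escapes the right-hand side character by character, which is why the log shows `[[ nvme0 == \n\v\m\e\0 ]]`). A minimal sketch of that check, with canned data standing in for the real `rpc_cmd bdev_nvme_get_controllers` output:

```shell
#!/usr/bin/env bash
# Sketch of the post-connect check: compare the controller name reported
# by the RPC layer against the expected bdev name before detaching.
# ctrlr_name is a stand-in here; in the test it comes from
#   rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
ctrlr_name=nvme0
expected=nvme0
if [[ $ctrlr_name == "$expected" ]]; then
  # Only a successfully authenticated attach leaves a controller to detach
  echo "controller $ctrlr_name attached"
fi
```

Quoting `"$expected"` makes the comparison literal rather than a glob match, which is the effect the escaped `\n\v\m\e\0` form in the trace achieves.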
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]]
00:26:09.536 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.537 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.795 nvme0n1
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]]
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:09.795 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:09.796 19:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.055 nvme0n1
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.055 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]]
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:10.312 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.313 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.571 nvme0n1
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z ''
]] 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.571 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.572 
19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.572 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.572 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.572 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.572 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.572 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 nvme0n1 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.828 19:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.725 19:24:57 
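The `connect_authenticate` sequence repeated throughout this trace is: restrict the host to one digest/dhgroup pair, attach with the DH-HMAC-CHAP key for this keyid, then verify and detach. A runnable sketch of that flow, with `rpc_cmd` stubbed to print the call it would make (the real helper dispatches to SPDK's `scripts/rpc.py`; the stub is an assumption for illustration):

```shell
# Stand-in for SPDK's rpc_cmd helper; the real one invokes rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

# Sketch of the connect_authenticate flow traced above (addresses and
# NQNs copied from the trace; the ckey argument is shown elsewhere).
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Allow exactly one digest/dhgroup pair for this iteration.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach, authenticating with the host key for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

    # The trace then verifies the controller name via
    # bdev_nvme_get_controllers | jq -r '.[].name' before detaching.
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```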
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.725 19:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.291 nvme0n1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.291 19:24:58 
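The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line appearing throughout the trace is the bash idiom that makes bidirectional authentication optional: the array expands to the two controller-key arguments only when a ckey exists for this keyid, and to nothing at all otherwise (as seen for keyid 4, whose `ckey=` entry is empty). A self-contained illustration:

```shell
# keyid 0 has a controller key (bidirectional); keyid 1 does not.
# The secret value here is illustrative, not one from the trace.
ckeys=("DHHC-1:03:c2VjcmV0:" "")

ckey_args() {
    local keyid=$1
    # ${var:+words} expands to the alternate words only when var is
    # set and non-empty, so the array gets 2 elements or 0.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"
}
```

Passing `"${ckey[@]}"` to `bdev_nvme_attach_controller` therefore adds `--dhchap-ctrlr-key ckeyN` only for bidirectional entries, with no empty-argument side effects.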
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.291 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 nvme0n1 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.857 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 19:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.858 19:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.421 nvme0n1 00:26:14.421 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.421 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.421 19:24:59 
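The secrets echoed in this trace follow the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hash>:<base64>:`, where the hash field is `00` (no transformation), `01` (SHA-256), `02` (SHA-384), or `03` (SHA-512). A small validator sketching that shape (a format check only, under the assumption above; it does not verify the key length implied by the hash field):

```shell
# Returns success iff the argument looks like a DH-HMAC-CHAP secret
# as used in the trace: DHHC-1:<00|01|02|03>:<base64>:
is_dhchap_secret() {
    [[ $1 =~ ^DHHC-1:0[0-3]:[A-Za-z0-9+/]+={0,2}:$ ]]
}
```

For example, the keyid-3 controller key from the trace, `DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:`, passes this check.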
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.421 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.421 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.421 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.422 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.422 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.422 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 nvme0n1 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.988 19:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.988 19:24:59 
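On the target side, `nvmet_auth_set_key` (the source of the `echo 'hmac(sha256)'`, dhgroup, and key echoes in this trace) configures the Linux nvmet host entry. The sketch below assumes the kernel nvmet host configfs attribute names (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key`) and takes the host directory as a parameter so it can run against a scratch directory rather than `/sys/kernel/config/nvmet/hosts/<hostnqn>`:

```shell
# Hedged sketch of nvmet_auth_set_key's target-side writes; attribute
# names are assumed from the nvmet configfs interface, and hostdir is
# a stand-in for the per-host configfs directory.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4 hostdir=$5

    echo "hmac($digest)" > "$hostdir/dhchap_hash"
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"
    echo "$key"          > "$hostdir/dhchap_key"
    # Controller key only for bidirectional entries (empty ckey skips it).
    [[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"
    return 0
}
```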
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 19:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.555 nvme0n1 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:15.555 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.556 19:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.556 19:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.489 nvme0n1 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.489 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.490 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.490 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.490 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.749 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.749 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.750 19:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.750 19:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.684 nvme0n1 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.684 19:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.684 19:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 nvme0n1 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.623 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.623 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.623 19:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.623 19:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.562 nvme0n1 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.562 19:25:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.562 19:25:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.562 19:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.508 nvme0n1 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.508 19:25:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.508 nvme0n1
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:20.508 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]]
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:20.765 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.766 nvme0n1
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:20.766 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.025 nvme0n1
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==:
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]]
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy:
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.025 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.284 nvme0n1
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=:
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.284 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.543 nvme0n1
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d:
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=:
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d:
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=:
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.543 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.803 nvme0n1
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==:
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]]
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==:
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:26:21.803 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.804 19:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.065 nvme0n1
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p:
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge:
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.065 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.324 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.324 nvme0n1
00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.325 19:25:07
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.325 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.582 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.583 nvme0n1 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.583 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.840 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.841 nvme0n1 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.841 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.100 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.100 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.100 19:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.100 19:25:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.359 nvme0n1 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.359 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.360 
19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.360 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.929 nvme0n1 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.929 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.929 19:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.929 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.930 19:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.188 nvme0n1 00:26:24.188 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.189 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.189 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.756 nvme0n1 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.756 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.756 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.756 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.757 
19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.757 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.015 nvme0n1 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.015 19:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:25.015 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.016 19:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.584 nvme0n1 
00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:25.584 19:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.584 
19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.584 19:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.153 nvme0n1 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.154 19:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.154 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.720 nvme0n1 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.720 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.721 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.721 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:26.721 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.721 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.721 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.979 19:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.547 nvme0n1 00:26:27.547 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.547 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.548 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.117 nvme0n1 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.117 19:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:28.117 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.117 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.117 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.051 nvme0n1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:29.051 19:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.051 19:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.986 nvme0n1 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.986 
19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.986 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.987 19:25:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.987 19:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.916 nvme0n1 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.916 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.917 19:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 19:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.896 nvme0n1 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:31.896 19:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.896 19:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.834 nvme0n1 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.834 
19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.834 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 nvme0n1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.095 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.095 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 nvme0n1 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:33.355 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.355 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.356 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.616 nvme0n1 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.616 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.616 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.616 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.876 nvme0n1 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.876 19:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.876 19:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.135 nvme0n1 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.135 19:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.135 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.394 nvme0n1 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.394 19:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.394 
19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.394 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.395 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.656 nvme0n1 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 
00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.656 19:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.656 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.917 nvme0n1 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.917 19:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.917 19:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.177 nvme0n1 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.177 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.177 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 nvme0n1 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.436 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.436 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.695 nvme0n1 00:26:35.695 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.695 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.695 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.695 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.695 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.956 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:35.956 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.956 19:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.956 19:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.215 nvme0n1 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.215 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.215 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:36.216 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.216 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 nvme0n1 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.784 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.785 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.785 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.785 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.785 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.785 19:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.785 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.045 nvme0n1 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.045 
19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.045 19:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.304 nvme0n1 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.304 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.562 19:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.562 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.128 nvme0n1 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.128 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:38.129 19:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.129 19:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.700 nvme0n1 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.700 
19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.700 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.701 19:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 nvme0n1 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.271 19:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.271 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:39.838 nvme0n1 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:39.838 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.839 
19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.839 19:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.407 nvme0n1 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTMxZDk3ZTJhZmY0OTBiMzEwNmQyOWMzZTI0MmI5MTSRMw5d: 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWYwNTlkMzBhYWFkNjU0NGE4M2M5ODA3ZTlkZmQ4MmE1OTE2MDk4NTU2NGI3MTFiNGI0Nzg1MDMwYWZiZGZmY1Vpw3w=: 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.407 19:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.407 19:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.345 nvme0n1 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.345 19:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.345 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.346 19:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 19:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.282 nvme0n1 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.282 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.541 19:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.541 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.541 19:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.542 19:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.479 nvme0n1 00:26:43.479 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.479 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.479 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.479 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.480 19:25:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjIxODhlOTUyMjVlZjgyOGQ4OGRjN2FiZjc3YWJhMjdmN2VmMDc4OTEzZjVjZmRjr9F7zg==: 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGM0NzllMzNiMTg0OTlkMWE0ZjYzODU0NGIzOTIyZjgpJrPy: 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.480 19:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
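Each cycle above builds its `bdev_nvme_attach_controller` arguments via the `host/auth.sh@58` line `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`: the `:+` expansion emits the controller-key flag pair only when a ckey exists for that keyid (note keyid 4 in this log has `ckey=''` and is attached without the flag). A small standalone sketch of that pattern, with hypothetical key values:

```shell
#!/usr/bin/env bash
# Hedged sketch of the conditional controller-key argument pattern from
# host/auth.sh@58. ${ckeys[keyid]:+...} expands to two words
# (--dhchap-ctrlr-key and ckeyN) when a ckey is set, and to nothing
# when it is empty, so the rpc call can splice "${ckey[@]}" in uniformly.
ckeys=("DHHC-1:00:exampleckey:" "")   # placeholder values; keyid 1 has no ckey

keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
args_with_ckey=${#ckey[@]}            # 2: flag plus key name

keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
args_without_ckey=${#ckey[@]}         # 0: flag omitted entirely
```

This matches the log, where keyids 0-3 attach with `--dhchap-ctrlr-key ckeyN` and keyid 4 attaches with `--dhchap-key key4` alone.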
00:26:44.418 nvme0n1 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExODhlZjkwMTY0ZTBkNGJhYTc2ZjZkYTAyOGQ3ZjU5YzhmNjQwZjFkZWJmNWQ5ZWNlNGY4Y2UzYjkxYWQ2ZPGK8JU=: 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.418 
19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.418 19:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.354 nvme0n1 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:45.354 
19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.354 request: 00:26:45.354 { 00:26:45.354 "name": "nvme0", 00:26:45.354 "trtype": "tcp", 00:26:45.354 "traddr": "10.0.0.1", 00:26:45.354 "adrfam": "ipv4", 00:26:45.354 "trsvcid": "4420", 00:26:45.354 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:45.354 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:45.354 "prchk_reftag": false, 00:26:45.354 "prchk_guard": false, 00:26:45.354 "hdgst": false, 00:26:45.354 "ddgst": false, 00:26:45.354 "allow_unrecognized_csi": false, 00:26:45.354 "method": "bdev_nvme_attach_controller", 00:26:45.354 "req_id": 1 00:26:45.354 } 00:26:45.354 Got JSON-RPC error response 00:26:45.354 response: 00:26:45.354 { 00:26:45.354 "code": -5, 00:26:45.354 "message": "Input/output 
error" 00:26:45.354 } 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.354 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.355 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.613 request: 00:26:45.613 { 00:26:45.613 "name": "nvme0", 00:26:45.613 "trtype": "tcp", 00:26:45.613 "traddr": "10.0.0.1", 
00:26:45.613 "adrfam": "ipv4", 00:26:45.613 "trsvcid": "4420", 00:26:45.613 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:45.613 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:45.613 "prchk_reftag": false, 00:26:45.613 "prchk_guard": false, 00:26:45.613 "hdgst": false, 00:26:45.613 "ddgst": false, 00:26:45.613 "dhchap_key": "key2", 00:26:45.613 "allow_unrecognized_csi": false, 00:26:45.613 "method": "bdev_nvme_attach_controller", 00:26:45.613 "req_id": 1 00:26:45.613 } 00:26:45.613 Got JSON-RPC error response 00:26:45.613 response: 00:26:45.613 { 00:26:45.613 "code": -5, 00:26:45.613 "message": "Input/output error" 00:26:45.613 } 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.613 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:45.613 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.613 request: 00:26:45.613 { 00:26:45.613 "name": "nvme0", 00:26:45.613 "trtype": "tcp", 00:26:45.613 "traddr": "10.0.0.1", 00:26:45.613 "adrfam": "ipv4", 00:26:45.613 "trsvcid": "4420", 00:26:45.613 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:45.613 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:45.613 "prchk_reftag": false, 00:26:45.613 "prchk_guard": false, 00:26:45.613 "hdgst": false, 00:26:45.613 "ddgst": false, 00:26:45.613 "dhchap_key": "key1", 00:26:45.613 "dhchap_ctrlr_key": "ckey2", 00:26:45.613 "allow_unrecognized_csi": false, 00:26:45.613 "method": "bdev_nvme_attach_controller", 00:26:45.613 "req_id": 1 00:26:45.613 } 00:26:45.613 Got JSON-RPC error response 00:26:45.613 response: 00:26:45.613 { 00:26:45.613 "code": -5, 00:26:45.613 "message": "Input/output error" 00:26:45.613 } 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.613 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.614 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 nvme0n1 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.873 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 19:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 request: 00:26:45.873 { 00:26:45.873 "name": "nvme0", 00:26:45.873 "dhchap_key": "key1", 00:26:45.873 "dhchap_ctrlr_key": "ckey2", 00:26:45.873 "method": "bdev_nvme_set_keys", 00:26:45.873 "req_id": 1 00:26:45.873 } 00:26:45.873 Got JSON-RPC error response 00:26:45.873 response: 00:26:45.873 { 00:26:45.873 "code": -13, 00:26:45.873 "message": "Permission denied" 00:26:45.873 } 00:26:45.873 
19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:45.873 19:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:47.254 19:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdlOTkwMTMxN2ZkYWJhN2E0N2VhYzJmM2U2NmJjM2MzOWY2YzIxODA3ZWQ1YjRhvNRZoQ==: 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: ]] 00:26:48.195 19:25:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDEzNzk3YzJlNzU4ZmE4ZTFmNzg4ZTgyMmU1YTU5NTFlZTg2MjQwYjQ5OGFjMzljZhakjQ==: 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.195 19:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.195 nvme0n1 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.195 19:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzdjOGU3ZmZmOTJlMDUwYjMzYWFiMTk1YjRmYmMyNGW3is8p: 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: ]] 00:26:48.195 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhNTQ4OTY2ZWU3YzhmMDJkYTY5OGFmZDAwODAzOGVOJjge: 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:48.196 
19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.196 request: 00:26:48.196 { 00:26:48.196 "name": "nvme0", 00:26:48.196 "dhchap_key": "key2", 00:26:48.196 "dhchap_ctrlr_key": "ckey1", 00:26:48.196 "method": "bdev_nvme_set_keys", 00:26:48.196 "req_id": 1 00:26:48.196 } 00:26:48.196 Got JSON-RPC error response 00:26:48.196 response: 00:26:48.196 { 00:26:48.196 "code": -13, 00:26:48.196 "message": "Permission denied" 00:26:48.196 } 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.196 19:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:48.196 19:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:49.574 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.575 rmmod nvme_tcp 00:26:49.575 rmmod nvme_fabrics 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 309802 ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 309802 ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 309802' 00:26:49.575 killing process with pid 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 309802 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:49.575 
19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.575 19:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 
00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:52.110 19:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:53.043 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:53.043 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:53.043 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:53.978 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:53.978 19:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.t0T /tmp/spdk.key-null.wDx /tmp/spdk.key-sha256.UYb /tmp/spdk.key-sha384.xRz /tmp/spdk.key-sha512.JEF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:53.978 19:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:55.354 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:55.354 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:55.354 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:55.354 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:55.354 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:55.354 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:55.354 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:55.354 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:55.354 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:55.354 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:55.354 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:55.354 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:55.354 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:55.354 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:55.354 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:55.354 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:55.354 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:55.354 00:26:55.354 real 0m56.262s 00:26:55.354 user 0m53.719s 00:26:55.354 sys 0m6.305s 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.354 ************************************ 00:26:55.354 END TEST nvmf_auth_host 00:26:55.354 ************************************ 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.354 ************************************ 00:26:55.354 START TEST nvmf_digest 00:26:55.354 ************************************ 00:26:55.354 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:55.613 * Looking for test storage... 00:26:55.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@338 -- # local 'op=<' 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 
00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.613 --rc genhtml_branch_coverage=1 00:26:55.613 --rc genhtml_function_coverage=1 00:26:55.613 --rc genhtml_legend=1 00:26:55.613 --rc geninfo_all_blocks=1 00:26:55.613 --rc geninfo_unexecuted_blocks=1 00:26:55.613 00:26:55.613 ' 00:26:55.613 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:55.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.613 --rc genhtml_branch_coverage=1 00:26:55.613 --rc genhtml_function_coverage=1 00:26:55.614 --rc genhtml_legend=1 00:26:55.614 --rc geninfo_all_blocks=1 00:26:55.614 --rc geninfo_unexecuted_blocks=1 00:26:55.614 00:26:55.614 ' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:55.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.614 --rc genhtml_branch_coverage=1 00:26:55.614 --rc genhtml_function_coverage=1 00:26:55.614 --rc genhtml_legend=1 00:26:55.614 --rc geninfo_all_blocks=1 00:26:55.614 --rc geninfo_unexecuted_blocks=1 00:26:55.614 00:26:55.614 ' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:55.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.614 --rc genhtml_branch_coverage=1 00:26:55.614 --rc genhtml_function_coverage=1 00:26:55.614 --rc genhtml_legend=1 00:26:55.614 --rc geninfo_all_blocks=1 00:26:55.614 --rc geninfo_unexecuted_blocks=1 00:26:55.614 00:26:55.614 ' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.614 
19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.614 19:25:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.614 19:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.145 19:25:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:58.145 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:58.145 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:58.145 Found net devices under 0000:84:00.0: cvl_0_0 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:58.145 Found net devices under 0000:84:00.1: cvl_0_1 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:58.145 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:26:58.146 00:26:58.146 --- 10.0.0.2 ping statistics --- 00:26:58.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.146 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:26:58.146 00:26:58.146 --- 10.0.0.1 ping statistics --- 00:26:58.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.146 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:58.146 ************************************ 00:26:58.146 START TEST nvmf_digest_clean 00:26:58.146 ************************************ 00:26:58.146 
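The TCP transport bring-up traced above (nvmf_tcp_init in nvmf/common.sh) moves one interface into a private network namespace and leaves its peer in the root namespace, then opens the NVMe/TCP port. A dry-run sketch of that same sequence; the interface names (cvl_0_0/cvl_0_1) and 10.0.0.x addresses are taken from the trace, and the function only prints the commands instead of executing them, since the real ones need root:

```shell
# Dry-run sketch of the nvmf_tcp_init steps from nvmf/common.sh.
# Interface names and addresses come from the trace above; nothing
# here is executed, only printed.
nvmf_tcp_init_cmds() {
    ns=cvl_0_0_ns_spdk
    cat <<EOF
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $ns
ip link set cvl_0_0 netns $ns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
EOF
}
nvmf_tcp_init_cmds
```

The cross-namespace pings in the log (root ns to 10.0.0.2, target ns to 10.0.0.1) are what validate this topology before the target starts.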
19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=320113 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 320113 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 320113 ']' 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.146 19:25:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.146 19:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.146 [2024-12-06 19:25:43.022777] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:26:58.146 [2024-12-06 19:25:43.022857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.146 [2024-12-06 19:25:43.092784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.146 [2024-12-06 19:25:43.145396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.146 [2024-12-06 19:25:43.145458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.146 [2024-12-06 19:25:43.145495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.146 [2024-12-06 19:25:43.145507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.146 [2024-12-06 19:25:43.145517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:58.146 [2024-12-06 19:25:43.146207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.404 null0 00:26:58.404 [2024-12-06 19:25:43.372250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.404 [2024-12-06 19:25:43.396497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=320139 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320139 /var/tmp/bperf.sock 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 320139 ']' 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:58.404 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.405 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.405 [2024-12-06 19:25:43.446610] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:26:58.405 [2024-12-06 19:25:43.446701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320139 ] 00:26:58.662 [2024-12-06 19:25:43.515595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.662 [2024-12-06 19:25:43.575079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.662 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.662 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:58.662 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.662 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.662 19:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:59.231 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.231 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.489 nvme0n1 00:26:59.489 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.489 19:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.749 Running I/O for 2 seconds... 00:27:01.624 19725.00 IOPS, 77.05 MiB/s [2024-12-06T18:25:46.673Z] 20144.00 IOPS, 78.69 MiB/s 00:27:01.624 Latency(us) 00:27:01.624 [2024-12-06T18:25:46.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.624 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:01.624 nvme0n1 : 2.01 20152.29 78.72 0.00 0.00 6345.42 3155.44 18738.44 00:27:01.624 [2024-12-06T18:25:46.673Z] =================================================================================================================== 00:27:01.624 [2024-12-06T18:25:46.673Z] Total : 20152.29 78.72 0.00 0.00 6345.42 3155.44 18738.44 00:27:01.624 { 00:27:01.624 "results": [ 00:27:01.624 { 00:27:01.624 "job": "nvme0n1", 00:27:01.624 "core_mask": "0x2", 00:27:01.624 "workload": "randread", 00:27:01.624 "status": "finished", 00:27:01.624 "queue_depth": 128, 00:27:01.624 "io_size": 4096, 00:27:01.624 "runtime": 2.005529, 00:27:01.624 "iops": 20152.288997067608, 00:27:01.624 "mibps": 78.71987889479534, 00:27:01.624 "io_failed": 0, 00:27:01.624 "io_timeout": 0, 00:27:01.624 "avg_latency_us": 6345.424502507258, 00:27:01.624 "min_latency_us": 3155.437037037037, 00:27:01.624 "max_latency_us": 18738.44148148148 00:27:01.624 } 00:27:01.624 ], 00:27:01.624 "core_count": 1 00:27:01.624 } 00:27:01.624 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.624 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:01.624 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.624 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.624 | select(.opcode=="crc32c") 00:27:01.624 | "\(.module_name) \(.executed)"' 00:27:01.624 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320139 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 320139 ']' 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 320139 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320139 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320139' 00:27:01.882 killing process with pid 320139 00:27:01.882 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 320139 00:27:01.882 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.882 00:27:01.882 Latency(us) 00:27:01.882 [2024-12-06T18:25:46.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.882 [2024-12-06T18:25:46.931Z] =================================================================================================================== 00:27:01.882 [2024-12-06T18:25:46.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.883 19:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 320139 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=320664 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 320664 /var/tmp/bperf.sock 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 320664 ']' 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.141 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:02.141 [2024-12-06 19:25:47.161442] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:02.141 [2024-12-06 19:25:47.161516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320664 ] 00:27:02.141 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:02.141 Zero copy mechanism will not be used. 
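The "Zero copy mechanism will not be used" notice above fires because this run's 128 KiB I/O size exceeds the 64 KiB zero-copy threshold the sock layer reports; the gate is plain arithmetic:

```shell
# The sock layer disables zero copy when the I/O size exceeds the
# threshold printed in the log (65536 bytes). 131072 is this run's
# -o value.
io_size=131072
zcopy_threshold=65536
if [ "$io_size" -gt "$zcopy_threshold" ]; then
    echo "zero copy disabled"
else
    echo "zero copy eligible"
fi
```

The earlier 4096-byte runs sit below the threshold, which is why the notice only appears for the 131072-byte workloads.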
00:27:02.421 [2024-12-06 19:25:47.227452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.421 [2024-12-06 19:25:47.282357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.421 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.421 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:02.421 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:02.421 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:02.421 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:02.764 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.764 19:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.361 nvme0n1 00:27:03.361 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:03.361 19:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.361 Zero copy mechanism will not be used. 00:27:03.361 Running I/O for 2 seconds... 
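After each run the test reads accel_get_stats and keeps only the crc32c operation via the jq filter shown after the first run ('.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'). Where jq is unavailable, the same module/executed pair can be pulled with sed. The JSON below is a hand-written stand-in whose shape is inferred from that filter; the numbers are illustrative, not captured output:

```shell
# Stand-in accel_get_stats reply; structure assumed from the jq
# filter in the trace, values illustrative.
cat > /tmp/accel_stats.json <<'EOF'
{
  "operations": [
    { "opcode": "copy",   "module_name": "software", "executed": 12 },
    { "opcode": "crc32c", "module_name": "software", "executed": 40304 }
  ]
}
EOF
# Pull module_name and executed for the crc32c entry, jq-free.
read -r acc_module acc_executed <<EOF
$(sed -n 's/.*"crc32c", *"module_name": *"\([a-z]*\)", *"executed": *\([0-9]*\).*/\1 \2/p' /tmp/accel_stats.json)
EOF
echo "$acc_module $acc_executed"
```

This mirrors the pass condition in the trace: acc_executed must be greater than zero and acc_module must equal the expected module ("software" here, since scan_dsa=false).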
00:27:05.304 5043.00 IOPS, 630.38 MiB/s [2024-12-06T18:25:50.353Z] 5096.00 IOPS, 637.00 MiB/s 00:27:05.304 Latency(us) 00:27:05.304 [2024-12-06T18:25:50.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.304 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:05.304 nvme0n1 : 2.00 5097.77 637.22 0.00 0.00 3134.13 885.95 11068.30 00:27:05.304 [2024-12-06T18:25:50.353Z] =================================================================================================================== 00:27:05.304 [2024-12-06T18:25:50.353Z] Total : 5097.77 637.22 0.00 0.00 3134.13 885.95 11068.30 00:27:05.304 { 00:27:05.304 "results": [ 00:27:05.304 { 00:27:05.304 "job": "nvme0n1", 00:27:05.304 "core_mask": "0x2", 00:27:05.304 "workload": "randread", 00:27:05.304 "status": "finished", 00:27:05.304 "queue_depth": 16, 00:27:05.304 "io_size": 131072, 00:27:05.304 "runtime": 2.004993, 00:27:05.304 "iops": 5097.773408685217, 00:27:05.304 "mibps": 637.2216760856521, 00:27:05.304 "io_failed": 0, 00:27:05.304 "io_timeout": 0, 00:27:05.304 "avg_latency_us": 3134.134399547772, 00:27:05.304 "min_latency_us": 885.9496296296296, 00:27:05.304 "max_latency_us": 11068.302222222223 00:27:05.304 } 00:27:05.304 ], 00:27:05.304 "core_count": 1 00:27:05.304 } 00:27:05.304 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:05.304 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:05.304 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:05.304 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:05.304 | select(.opcode=="crc32c") 00:27:05.304 | "\(.module_name) \(.executed)"' 00:27:05.304 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 320664 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 320664 ']' 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 320664 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320664 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320664' 00:27:05.565 killing process with pid 320664 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 320664 00:27:05.565 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.565 
00:27:05.565 Latency(us) 00:27:05.565 [2024-12-06T18:25:50.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.565 [2024-12-06T18:25:50.614Z] =================================================================================================================== 00:27:05.565 [2024-12-06T18:25:50.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.565 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 320664 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=321081 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 321081 /var/tmp/bperf.sock 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 321081 ']' 00:27:05.824 19:25:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.824 19:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:05.824 [2024-12-06 19:25:50.868165] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:05.824 [2024-12-06 19:25:50.868238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321081 ] 00:27:06.083 [2024-12-06 19:25:50.937395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.083 [2024-12-06 19:25:50.992291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.083 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.083 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:06.083 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:06.083 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:06.083 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:06.650 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.650 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.910 nvme0n1 00:27:06.910 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:06.910 19:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:07.170 Running I/O for 2 seconds... 
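bdevperf prints each run's results twice: a human-readable latency table and a JSON summary (both visible after the randread runs above). The iops figure can be scraped from that JSON without jq; the fragment below copies values from the first run's summary, trimmed to the keys being read:

```shell
# JSON fragment copied (trimmed) from the first randread run's
# summary in the log above.
cat > /tmp/bperf_summary.json <<'EOF'
{
  "results": [
    {
      "job": "nvme0n1",
      "runtime": 2.005529,
      "iops": 20152.288997067608
    }
  ],
  "core_count": 1
}
EOF
# Grab the iops value by key; adequate for this flat, known layout.
iops=$(sed -n 's/.*"iops": *\([0-9.]*\).*/\1/p' /tmp/bperf_summary.json)
echo "$iops"
```

A sanity check against the table form: 20152.29 IOPS at 4096 bytes is 20152.29/256 ≈ 78.72 MiB/s, matching the MiB/s column.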
00:27:09.048 21485.00 IOPS, 83.93 MiB/s [2024-12-06T18:25:54.097Z] 21366.50 IOPS, 83.46 MiB/s 00:27:09.048 Latency(us) 00:27:09.048 [2024-12-06T18:25:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.048 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:09.048 nvme0n1 : 2.01 21369.46 83.47 0.00 0.00 5978.24 4684.61 11553.75 00:27:09.048 [2024-12-06T18:25:54.097Z] =================================================================================================================== 00:27:09.048 [2024-12-06T18:25:54.097Z] Total : 21369.46 83.47 0.00 0.00 5978.24 4684.61 11553.75 00:27:09.048 { 00:27:09.048 "results": [ 00:27:09.048 { 00:27:09.048 "job": "nvme0n1", 00:27:09.048 "core_mask": "0x2", 00:27:09.048 "workload": "randwrite", 00:27:09.048 "status": "finished", 00:27:09.048 "queue_depth": 128, 00:27:09.048 "io_size": 4096, 00:27:09.048 "runtime": 2.005713, 00:27:09.048 "iops": 21369.45814281505, 00:27:09.048 "mibps": 83.47444587037128, 00:27:09.048 "io_failed": 0, 00:27:09.048 "io_timeout": 0, 00:27:09.048 "avg_latency_us": 5978.238299412312, 00:27:09.048 "min_latency_us": 4684.61037037037, 00:27:09.048 "max_latency_us": 11553.754074074073 00:27:09.048 } 00:27:09.048 ], 00:27:09.048 "core_count": 1 00:27:09.048 } 00:27:09.048 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:09.048 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:09.048 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:09.048 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:09.048 | select(.opcode=="crc32c") 00:27:09.048 | "\(.module_name) \(.executed)"' 00:27:09.048 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 321081 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 321081 ']' 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 321081 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.307 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321081 00:27:09.566 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.566 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.566 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321081' 00:27:09.566 killing process with pid 321081 00:27:09.566 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 321081 00:27:09.566 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.566 
00:27:09.566 Latency(us) 00:27:09.566 [2024-12-06T18:25:54.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.566 [2024-12-06T18:25:54.615Z] =================================================================================================================== 00:27:09.566 [2024-12-06T18:25:54.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.566 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 321081 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=321491 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 321491 /var/tmp/bperf.sock 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 321491 ']' 00:27:09.824 19:25:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.824 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.824 [2024-12-06 19:25:54.666540] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:09.824 [2024-12-06 19:25:54.666617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321491 ] 00:27:09.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:09.824 Zero copy mechanism will not be used. 
00:27:09.824 [2024-12-06 19:25:54.731957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.824 [2024-12-06 19:25:54.791311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.083 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.083 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:10.083 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:10.083 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:10.083 19:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:10.341 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.341 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.598 nvme0n1 00:27:10.598 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:10.598 19:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.856 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:10.856 Zero copy mechanism will not be used. 00:27:10.856 Running I/O for 2 seconds... 
00:27:12.730 4846.00 IOPS, 605.75 MiB/s [2024-12-06T18:25:57.779Z] 4792.00 IOPS, 599.00 MiB/s 00:27:12.730 Latency(us) 00:27:12.730 [2024-12-06T18:25:57.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.730 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:12.730 nvme0n1 : 2.00 4791.19 598.90 0.00 0.00 3332.27 2378.71 10825.58 00:27:12.730 [2024-12-06T18:25:57.779Z] =================================================================================================================== 00:27:12.730 [2024-12-06T18:25:57.779Z] Total : 4791.19 598.90 0.00 0.00 3332.27 2378.71 10825.58 00:27:12.730 { 00:27:12.730 "results": [ 00:27:12.730 { 00:27:12.730 "job": "nvme0n1", 00:27:12.730 "core_mask": "0x2", 00:27:12.730 "workload": "randwrite", 00:27:12.730 "status": "finished", 00:27:12.730 "queue_depth": 16, 00:27:12.730 "io_size": 131072, 00:27:12.730 "runtime": 2.004305, 00:27:12.730 "iops": 4791.186970046974, 00:27:12.730 "mibps": 598.8983712558718, 00:27:12.730 "io_failed": 0, 00:27:12.730 "io_timeout": 0, 00:27:12.730 "avg_latency_us": 3332.2676638858998, 00:27:12.730 "min_latency_us": 2378.7140740740742, 00:27:12.730 "max_latency_us": 10825.576296296296 00:27:12.730 } 00:27:12.730 ], 00:27:12.730 "core_count": 1 00:27:12.730 } 00:27:12.730 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:12.730 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:12.730 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:12.730 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:12.730 | select(.opcode=="crc32c") 00:27:12.730 | "\(.module_name) \(.executed)"' 00:27:12.730 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 321491 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 321491 ']' 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 321491 00:27:12.988 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:12.989 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.989 19:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321491 00:27:12.989 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.989 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.989 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321491' 00:27:12.989 killing process with pid 321491 00:27:12.989 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 321491 00:27:12.989 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.989 
00:27:12.989 Latency(us) 00:27:12.989 [2024-12-06T18:25:58.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.989 [2024-12-06T18:25:58.038Z] =================================================================================================================== 00:27:12.989 [2024-12-06T18:25:58.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.989 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 321491 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 320113 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 320113 ']' 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 320113 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320113 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320113' 00:27:13.247 killing process with pid 320113 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 320113 00:27:13.247 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 320113 00:27:13.506 00:27:13.506 real 0m15.536s 
00:27:13.506 user 0m30.445s 00:27:13.506 sys 0m5.085s 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:13.506 ************************************ 00:27:13.506 END TEST nvmf_digest_clean 00:27:13.506 ************************************ 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.506 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:13.763 ************************************ 00:27:13.763 START TEST nvmf_digest_error 00:27:13.763 ************************************ 00:27:13.763 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=322048 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:13.764 19:25:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 322048 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 322048 ']' 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.764 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.764 [2024-12-06 19:25:58.615946] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:13.764 [2024-12-06 19:25:58.616037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.764 [2024-12-06 19:25:58.686691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.764 [2024-12-06 19:25:58.741700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.764 [2024-12-06 19:25:58.741763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:13.764 [2024-12-06 19:25:58.741792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.764 [2024-12-06 19:25:58.741804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.764 [2024-12-06 19:25:58.741815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.764 [2024-12-06 19:25:58.742457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.021 [2024-12-06 19:25:58.867205] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.021 19:25:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.021 19:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.021 null0 00:27:14.021 [2024-12-06 19:25:58.990252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.021 [2024-12-06 19:25:59.014479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=322074 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 322074 /var/tmp/bperf.sock 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 322074 ']' 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:14.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.021 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.021 [2024-12-06 19:25:59.068904] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:14.021 [2024-12-06 19:25:59.068986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322074 ] 00:27:14.278 [2024-12-06 19:25:59.144269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.278 [2024-12-06 19:25:59.206997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.278 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.278 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:14.278 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:14.278 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.841 19:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.099 nvme0n1 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:15.099 19:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:15.366 Running I/O for 2 seconds... 00:27:15.366 [2024-12-06 19:26:00.294506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.294573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.294594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.306056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.306101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.306118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.321144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.321188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.321206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.335704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.335761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2326 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.335789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.348824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.348854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.348887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.359742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.359773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.359806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.373869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.373900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.373934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.387253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.387281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.387312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.366 [2024-12-06 19:26:00.403889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.366 [2024-12-06 19:26:00.403920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.366 [2024-12-06 19:26:00.403954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.629 [2024-12-06 19:26:00.418884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.418916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.418948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.434909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.434938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.434977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.445522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 
19:26:00.445550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.445582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.458179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.458213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.458246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.470181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.470210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.470245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.482992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.483039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.483056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.496082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.496110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.496142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.507257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.507286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.507318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.518947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.518986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.519003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.531594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.531623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.531654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.543244] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.543271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.543302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.557397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.557427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.557459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.567573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.567633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.581978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.582025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.582042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.595957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.595989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.596007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.610144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.610172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.610204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.622065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.622093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.622124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.634081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.634116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.634147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.647987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.648015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.648037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.658607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.658636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.658667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.630 [2024-12-06 19:26:00.672709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.630 [2024-12-06 19:26:00.672761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.630 [2024-12-06 19:26:00.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.685995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.686038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.686054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.696859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.696889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.696920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.710585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.710612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.710644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.726294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.726326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.726358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.740551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.740579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17568 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.740611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.751419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.751454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.751485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.764307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.764363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.764380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.775665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.775692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.775731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.789990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.790027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:82 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.790060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.805654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.805681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.805712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.818859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.818889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.818923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.833184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.833212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.833244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.844140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.844167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.844198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.857472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.857499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.857530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.870259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.870286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.870317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.888 [2024-12-06 19:26:00.881661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.888 [2024-12-06 19:26:00.881688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.888 [2024-12-06 19:26:00.881719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.889 [2024-12-06 19:26:00.893897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6ccc50) 00:27:15.889 [2024-12-06 19:26:00.893939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.889 [2024-12-06 19:26:00.893957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.889 [2024-12-06 19:26:00.905590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.889 [2024-12-06 19:26:00.905618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.889 [2024-12-06 19:26:00.905648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.889 [2024-12-06 19:26:00.918846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.889 [2024-12-06 19:26:00.918875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.889 [2024-12-06 19:26:00.918907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.889 [2024-12-06 19:26:00.929499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:15.889 [2024-12-06 19:26:00.929527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.889 [2024-12-06 19:26:00.929559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:00.942040] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:00.942070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:00.942101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:00.956580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:00.956608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:00.956639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:00.969248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:00.969276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:00.969307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:00.980240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:00.980267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:00.980298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:00.991154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:00.991182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:00.991213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.003998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.004039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.004060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.016050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.016078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.016109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.030126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.030155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.030186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.040800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.040828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.040860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.054731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.054763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.054780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.066266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.066294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.066326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.080699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.080759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.080791] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.092980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.093027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.093044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.103800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.103829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.103862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.115911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.115945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.115977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.127432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.127459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1082 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.127490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.139144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.139175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.139206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.151484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.151511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.151543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.163409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.163435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.163466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.173565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.173592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:94 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.173624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.147 [2024-12-06 19:26:01.186920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.147 [2024-12-06 19:26:01.186948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.147 [2024-12-06 19:26:01.186978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.199154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.199183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.215600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.215628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.215660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.225313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.225341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.225372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.239420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.239447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.239478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.254211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.254238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.254269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.266613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.266640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.266672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.276950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.276983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.277014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 19911.00 IOPS, 77.78 MiB/s [2024-12-06T18:26:01.454Z] [2024-12-06 19:26:01.291030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.291073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.291089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.303616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.303644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.303678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.315109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.315137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.315170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 
[2024-12-06 19:26:01.327024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.327057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.327088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.341399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.341428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.341458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.353649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.353678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.353710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.365572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.365599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.365630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.378146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.378175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.378206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.391917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.391945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.391976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.402053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.402082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.402114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.414188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.414215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.414246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.425226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.425254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.425290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.436816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.436870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.436887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.405 [2024-12-06 19:26:01.447845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.405 [2024-12-06 19:26:01.447873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.405 [2024-12-06 19:26:01.447903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.459406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.459435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.664 [2024-12-06 19:26:01.459466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.471382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.471409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.471444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.482734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.482762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.482793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.494122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.494148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.494179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.507552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.507579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.507609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.518573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.518600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.518631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.530449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.664 [2024-12-06 19:26:01.530477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.664 [2024-12-06 19:26:01.530514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.664 [2024-12-06 19:26:01.543021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.543064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.543089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.555863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.555906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.555924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.568921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.568966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.568983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.579009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.579038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.579068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.590895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.590924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.590955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.607121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.607150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.607181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.622111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.622139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.622170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.636374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.636414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.636445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.646077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.646121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.646154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.659243] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.659272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.659303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.673831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.673861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.673894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.688319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.698826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.698857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.698876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:16.665 [2024-12-06 19:26:01.711673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.665 [2024-12-06 19:26:01.711730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.665 [2024-12-06 19:26:01.711752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.723030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.723062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.723079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.735267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.735295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.735328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.748310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.748338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.748370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.759648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.759676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.759708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.771362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.771405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.771422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.783947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.783977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.783995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.795063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.795123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.809589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.809617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.824475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.824503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.824534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.838246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.838274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.838305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.853492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.853520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.927 [2024-12-06 19:26:01.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.868507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.868535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.868571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.884256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.884285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.884318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.894572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.894600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.894631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.906192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.906220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.906251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.918963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.918992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.919009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.931264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.931292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.931323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.943089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.943117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.943148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.953812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.953840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.953856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.927 [2024-12-06 19:26:01.966789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:16.927 [2024-12-06 19:26:01.966818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.927 [2024-12-06 19:26:01.966834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:01.976726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:01.976757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:01.976773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:01.991565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:01.991594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:01.991626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.005086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 
00:27:17.189 [2024-12-06 19:26:02.005114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.005145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.020222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.020250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.020282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.031303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.031332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.031364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.043612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.043639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.043671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.057060] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.057089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.057121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.070630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.070659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.070691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.081218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.081245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.081281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.095858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.095887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.095903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.109651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.109679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.109710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.122935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.122964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.122981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.136763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.136799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.136817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.152230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.152268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.152298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.163657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.163685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.163716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.178025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.178054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.178086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.193777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.193806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.193822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.209096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.209129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.209162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.189 [2024-12-06 19:26:02.224153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.189 [2024-12-06 19:26:02.224181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.189 [2024-12-06 19:26:02.224213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.448 [2024-12-06 19:26:02.237567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.448 [2024-12-06 19:26:02.237597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.448 [2024-12-06 19:26:02.237628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.448 [2024-12-06 19:26:02.248920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.448 [2024-12-06 19:26:02.248950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.448 [2024-12-06 19:26:02.248967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.448 [2024-12-06 19:26:02.263593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.448 [2024-12-06 19:26:02.263621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17130 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:17.448 [2024-12-06 19:26:02.263653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.448 [2024-12-06 19:26:02.276970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ccc50) 00:27:17.448 [2024-12-06 19:26:02.276999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.448 [2024-12-06 19:26:02.277016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.448 19937.50 IOPS, 77.88 MiB/s 00:27:17.449 Latency(us) 00:27:17.449 [2024-12-06T18:26:02.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.449 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:17.449 nvme0n1 : 2.01 19933.20 77.86 0.00 0.00 6415.13 2876.30 22330.79 00:27:17.449 [2024-12-06T18:26:02.498Z] =================================================================================================================== 00:27:17.449 [2024-12-06T18:26:02.498Z] Total : 19933.20 77.86 0.00 0.00 6415.13 2876.30 22330.79 00:27:17.449 { 00:27:17.449 "results": [ 00:27:17.449 { 00:27:17.449 "job": "nvme0n1", 00:27:17.449 "core_mask": "0x2", 00:27:17.449 "workload": "randread", 00:27:17.449 "status": "finished", 00:27:17.449 "queue_depth": 128, 00:27:17.449 "io_size": 4096, 00:27:17.449 "runtime": 2.006853, 00:27:17.449 "iops": 19933.198893989746, 00:27:17.449 "mibps": 77.86405817964744, 00:27:17.449 "io_failed": 0, 00:27:17.449 "io_timeout": 0, 00:27:17.449 "avg_latency_us": 6415.132817038722, 00:27:17.449 "min_latency_us": 2876.302222222222, 00:27:17.449 "max_latency_us": 22330.785185185185 00:27:17.449 } 00:27:17.449 ], 00:27:17.449 "core_count": 1 00:27:17.449 } 00:27:17.449 19:26:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:17.449 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:17.449 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:17.449 | .driver_specific 00:27:17.449 | .nvme_error 00:27:17.449 | .status_code 00:27:17.449 | .command_transient_transport_error' 00:27:17.449 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 322074 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 322074 ']' 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 322074 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322074 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322074' 00:27:17.707 
killing process with pid 322074 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 322074 00:27:17.707 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.707 00:27:17.707 Latency(us) 00:27:17.707 [2024-12-06T18:26:02.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.707 [2024-12-06T18:26:02.756Z] =================================================================================================================== 00:27:17.707 [2024-12-06T18:26:02.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.707 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 322074 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=322586 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 322586 /var/tmp/bperf.sock 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 322586 ']' 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.966 19:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.966 [2024-12-06 19:26:02.858418] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:17.966 [2024-12-06 19:26:02.858495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322586 ] 00:27:17.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:17.966 Zero copy mechanism will not be used. 
00:27:17.966 [2024-12-06 19:26:02.927211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.966 [2024-12-06 19:26:02.985518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.224 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.224 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:18.224 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.224 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.482 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.741 nvme0n1 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:18.741 19:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.001 Zero copy mechanism will not be used. 00:27:19.001 Running I/O for 2 seconds... 00:27:19.001 [2024-12-06 19:26:03.913692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.001 [2024-12-06 19:26:03.913800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.001 [2024-12-06 19:26:03.913821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.001 [2024-12-06 19:26:03.920428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.920459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.920490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.002 
[2024-12-06 19:26:03.926968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.927013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.931524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.931553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.931585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.937325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.937353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.937386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.943556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.943585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.943616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.949648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.949676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.949715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.954948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.954978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.954996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.960448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.960475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.960505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.965972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.966017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.966034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.971776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.971805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.971829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.978367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.978394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.978425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.984556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.984585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.984615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.991013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.991056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:19.002 [2024-12-06 19:26:03.991072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:03.997797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:03.997826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:03.997842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.004234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.004277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.004294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.010518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.010546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.010578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.016872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.016902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.016925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.024337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.024366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.024397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.031808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.031837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.031853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.038487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.038515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.038545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.002 [2024-12-06 19:26:04.045636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.002 [2024-12-06 19:26:04.045665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.002 [2024-12-06 19:26:04.045696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.263 [2024-12-06 19:26:04.052481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.263 [2024-12-06 19:26:04.052515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-12-06 19:26:04.052552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.263 [2024-12-06 19:26:04.059929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.263 [2024-12-06 19:26:04.059959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-12-06 19:26:04.059975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.263 [2024-12-06 19:26:04.066824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.263 [2024-12-06 19:26:04.066854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-12-06 19:26:04.066870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.263 [2024-12-06 19:26:04.073632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:19.263 [2024-12-06 19:26:04.073660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.263 [2024-12-06 19:26:04.073691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.263 [2024-12-06 19:26:04.080770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.080799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.080814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.087908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.087937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.087961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.094527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.094556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.094586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.100754] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.100783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.100799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.107106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.107134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.107173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.113901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.113930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.113946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.118113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.118140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.118171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.125889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.125919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.125935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.132088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.132129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.132144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.139084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.139112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.139143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.146127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.146179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.146197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.153326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.153368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.153384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.160547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.160574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.160605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.167463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.167490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.167520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.174917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.174948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.174965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.182289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.182317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.182349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.189544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.189572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.196438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.196466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.196497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.204177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.204206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.204237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.211212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.211239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.211269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.264 [2024-12-06 19:26:04.217915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.264 [2024-12-06 19:26:04.217946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.264 [2024-12-06 19:26:04.217963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.224518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.224546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.224583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.231418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.231445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.231477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.238668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.238695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.238736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.245820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.245850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.245867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.253286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.253313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.253344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.260542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.260570] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.260601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.267967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.268010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.268032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.275158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.275186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.275217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.283023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.283064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.283079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.290200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.290227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.290257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.296673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.296714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.296739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.303509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.303535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.303566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.265 [2024-12-06 19:26:04.310425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.265 [2024-12-06 19:26:04.310453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.265 [2024-12-06 19:26:04.310490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.526 [2024-12-06 19:26:04.317323] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.526 [2024-12-06 19:26:04.317355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.526 [2024-12-06 19:26:04.317386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.526 [2024-12-06 19:26:04.324541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.324582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.324599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.331693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.331748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.331766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.335564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.335590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.335619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.342377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.342404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.342435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.349285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.349312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.349341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.356338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.356365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.356398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.363332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.363358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.363390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.370292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.370357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.376314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.376342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.376371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.381579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.381606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.381636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.386943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.386970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 
19:26:04.386986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.392234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.392266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.398200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.398228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.398259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.404153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.404182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.404213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.410449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.410476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.410508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.416303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.416332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.416362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.422305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.422341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.422373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.428444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.428473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.428505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.434604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.434636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.434674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.441024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.441054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.441084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.447634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.447662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.447694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.454246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.527 [2024-12-06 19:26:04.454289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.527 [2024-12-06 19:26:04.454306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.527 [2024-12-06 19:26:04.461550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 
00:27:19.528 [2024-12-06 19:26:04.461579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.461611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.468662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.468690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.468740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.475922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.475951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.475982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.482863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.482894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.482912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.489566] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.489595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.489627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.496222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.496250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.496280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.502708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.502758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.502775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.509341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.509368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.509400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.515830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.515859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.515875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.522416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.522443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.522474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.528738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.528781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.528798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.535363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.535391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.535422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.542207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.542234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.542264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.548961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.548990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.549014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.555924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.555952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.555968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.562910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.562939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.562954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.528 [2024-12-06 19:26:04.569928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.528 [2024-12-06 19:26:04.569958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.528 [2024-12-06 19:26:04.569974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.789 [2024-12-06 19:26:04.576893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.789 [2024-12-06 19:26:04.576925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.789 [2024-12-06 19:26:04.576942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.789 [2024-12-06 19:26:04.583768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.789 [2024-12-06 19:26:04.583804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.789 [2024-12-06 19:26:04.583820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.789 [2024-12-06 19:26:04.590697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.789 [2024-12-06 19:26:04.590746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.789 [2024-12-06 19:26:04.590765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.789 [2024-12-06 19:26:04.597632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.789 [2024-12-06 19:26:04.597659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.789 [2024-12-06 19:26:04.597691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.789 [2024-12-06 19:26:04.604358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.604386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.604416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.608174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.608209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.608241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.614849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.614878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.614894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.621362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.621391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.621422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.628842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.628872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.628889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.636115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.636143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.636174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.643047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.643075] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.643106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.649466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.649494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.649524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.655974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.656017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.656032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.662647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.662675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.662706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.669353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.669382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.669414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.676325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.676353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.676384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.683602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.683633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.683665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.690906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.690937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.690953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.697930] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.697961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.697978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.704516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.704544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.704576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.711355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.711383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.711415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.718462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.718491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.725345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.725373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.725410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.732482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.732509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.732540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.739226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.739253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.739283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.746214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.746241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.746273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.753302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.753330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.753360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.760413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.760441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.760472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.767320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.767347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.767379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.774161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.774189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.790 [2024-12-06 19:26:04.774219] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.790 [2024-12-06 19:26:04.777979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.790 [2024-12-06 19:26:04.778022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.778038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.785386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.785425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.785456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.793122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.793165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.793182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.799922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.799950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.799966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.806660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.806687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.806718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.813949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.813977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.813993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.822221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.822261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.822292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:19.791 [2024-12-06 19:26:04.830406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:19.791 [2024-12-06 19:26:04.830450] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.791 [2024-12-06 19:26:04.830468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.050 [2024-12-06 19:26:04.838666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.838697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.838746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.847353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.847398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.847426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.855877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.855907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.855924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.865301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 
19:26:04.865330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.865362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.874018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.874049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.874079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.883934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.883966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.893691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.893745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.893763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.902343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.902373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.902403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 4500.00 IOPS, 562.50 MiB/s [2024-12-06T18:26:05.100Z] [2024-12-06 19:26:04.911779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.911809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.911825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.919328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.919371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.919388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.926759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.926801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.926818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:20.051 [2024-12-06 19:26:04.934424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.934462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.934494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.942155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.942198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.942214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.950122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.950151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.950183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.956977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.957024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.957041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.964070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.964115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.964133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.970591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.970635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.970653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.977363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.977393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.977425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.984348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.984378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.984410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:04.992035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:04.992068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:04.992100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:05.001012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:05.001043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:05.001059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:05.005209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:05.005239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:05.005270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:05.012075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:05.012119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:20.051 [2024-12-06 19:26:05.012136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:05.018951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.051 [2024-12-06 19:26:05.018993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.051 [2024-12-06 19:26:05.019011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.051 [2024-12-06 19:26:05.024928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.024959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.024975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.031522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.031552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.031583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.037270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.037299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.037329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.043735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.043766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.043797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.050463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.050493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.050525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.057144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.057189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.057206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.063778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.063808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.063826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.070372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.070402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.070434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.076435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.076464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.076496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.083456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.083486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.083518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.089842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.089873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.089890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.052 [2024-12-06 19:26:05.096250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.052 [2024-12-06 19:26:05.096294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.052 [2024-12-06 19:26:05.096313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.310 [2024-12-06 19:26:05.102698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.310 [2024-12-06 19:26:05.102777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.102796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.109183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.109212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.109243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.115560] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.115589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.115621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.119354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.119382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.119413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.125443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.125471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.125502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.131773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.131803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.131820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.137953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.137992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.138009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.143929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.143958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.143974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.149560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.149588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.149627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.155639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.155699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.162233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.162276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.162293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.168021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.168065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.168081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.173685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.173736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.173754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.179457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.179503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 
19:26:05.179521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.185968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.186014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.186031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.193674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.193718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.193744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.202269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.202306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.202339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.210189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.210227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.210259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.218088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.218118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.218158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.227036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.227067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.235937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.235966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.235982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.244977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.245006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.245023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.254363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.254393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.254424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.263718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.263756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.263776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.272575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.272605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.272636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.282259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.282288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.282319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.291370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.291400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.291432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.300871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.300902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.300919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.309615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.309645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.309677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.316920] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.316949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.316965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.324228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.324256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.324287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.331578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.331638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.339034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.339063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.339080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.345944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.345974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.311 [2024-12-06 19:26:05.345990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.311 [2024-12-06 19:26:05.352108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.311 [2024-12-06 19:26:05.352135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.312 [2024-12-06 19:26:05.352175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.312 [2024-12-06 19:26:05.359006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.312 [2024-12-06 19:26:05.359034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.312 [2024-12-06 19:26:05.359050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.571 [2024-12-06 19:26:05.365185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.571 [2024-12-06 19:26:05.365214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.571 [2024-12-06 19:26:05.365245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.571 [2024-12-06 19:26:05.372376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.571 [2024-12-06 19:26:05.372403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.571 [2024-12-06 19:26:05.372434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.571 [2024-12-06 19:26:05.379127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.571 [2024-12-06 19:26:05.379154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.571 [2024-12-06 19:26:05.379184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.571 [2024-12-06 19:26:05.385843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.571 [2024-12-06 19:26:05.385871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.571 [2024-12-06 19:26:05.385887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.571 [2024-12-06 19:26:05.392870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.571 [2024-12-06 19:26:05.392899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 
19:26:05.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.399747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.399776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.399793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.406838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.406866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.406882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.413902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.413936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.413952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.422086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.422114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.422144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.429341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.429369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.429399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.436847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.444393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.444421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.444459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.452339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.452367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.452397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.459791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.459820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.459836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.467332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.467360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.467392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.474488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.474517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.474548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.481842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.481870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.481888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.488471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.488498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.488528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.495856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.495886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.495903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.502988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.503017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.503033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.511013] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.511041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.511056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.518311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.518339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.518369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.525620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.525648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.525678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.533013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.533055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.533071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.540305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.540339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.540370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.549489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.549517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.549549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.558798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.558828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.558844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.567492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.567520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.567552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.577141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.577169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.577201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.586674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.586717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.586743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.572 [2024-12-06 19:26:05.595437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.572 [2024-12-06 19:26:05.595465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.572 [2024-12-06 19:26:05.595495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.573 [2024-12-06 19:26:05.604779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.573 [2024-12-06 19:26:05.604808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.573 [2024-12-06 
19:26:05.604825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.573 [2024-12-06 19:26:05.612813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.573 [2024-12-06 19:26:05.612844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.573 [2024-12-06 19:26:05.612861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.621485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.621517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.621548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.630379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.630408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.630439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.638852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.638881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.638898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.647981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.648009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.648025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.656917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.656945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.656962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.666213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.666256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.666273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.670353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.670379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.670410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.677758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.677786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.677803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.686118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.686146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.686186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.693805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.693835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.693851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.701372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.701400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.701430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.709062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.709090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.709121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.715852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.715881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.715897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.723312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.723339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.723370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.730325] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.730352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.730383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.737819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.737847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.737862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.745222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.745249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.745279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.752624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.752655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.752685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.760097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.760139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.760154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.767521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.767561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.767577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.775045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.775072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.833 [2024-12-06 19:26:05.775086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.833 [2024-12-06 19:26:05.782566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.833 [2024-12-06 19:26:05.782593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.782623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.790155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.790183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.790213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.797653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.797681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.797712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.805278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.805305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.805334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.812770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.812796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 
19:26:05.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.820154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.820181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.820211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.827605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.827632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.827661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.835100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.835126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.835158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.842554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.842581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.842610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.849896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.849925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.849941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.857431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.857458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.857488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.864567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.864594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.864625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.871664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.871691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.871728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:20.834 [2024-12-06 19:26:05.878752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:20.834 [2024-12-06 19:26:05.878781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.834 [2024-12-06 19:26:05.878802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 [2024-12-06 19:26:05.886058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:21.092 [2024-12-06 19:26:05.886103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-12-06 19:26:05.886120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:21.092 [2024-12-06 19:26:05.893482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:21.092 [2024-12-06 19:26:05.893509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-12-06 19:26:05.893539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:21.092 [2024-12-06 19:26:05.900621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2511620) 00:27:21.092 [2024-12-06 19:26:05.900662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-12-06 19:26:05.900680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:21.092 4352.50 IOPS, 544.06 MiB/s [2024-12-06T18:26:06.141Z] [2024-12-06 19:26:05.909282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2511620) 00:27:21.092 [2024-12-06 19:26:05.909309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.092 [2024-12-06 19:26:05.909339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:21.092 00:27:21.092 Latency(us) 00:27:21.092 [2024-12-06T18:26:06.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:21.092 nvme0n1 : 2.00 4354.28 544.28 0.00 0.00 3669.87 776.72 15146.10 00:27:21.092 [2024-12-06T18:26:06.141Z] =================================================================================================================== 00:27:21.092 [2024-12-06T18:26:06.141Z] Total : 4354.28 544.28 0.00 0.00 3669.87 776.72 15146.10 00:27:21.092 { 00:27:21.092 "results": [ 00:27:21.092 { 00:27:21.092 "job": "nvme0n1", 00:27:21.092 "core_mask": "0x2", 00:27:21.092 "workload": "randread", 00:27:21.092 "status": "finished", 00:27:21.092 "queue_depth": 16, 00:27:21.093 "io_size": 131072, 00:27:21.093 "runtime": 2.002857, 00:27:21.093 "iops": 4354.279911146927, 00:27:21.093 "mibps": 544.2849888933658, 00:27:21.093 "io_failed": 0, 00:27:21.093 "io_timeout": 0, 00:27:21.093 "avg_latency_us": 
3669.8679318970385, 00:27:21.093 "min_latency_us": 776.7229629629629, 00:27:21.093 "max_latency_us": 15146.097777777777 00:27:21.093 } 00:27:21.093 ], 00:27:21.093 "core_count": 1 00:27:21.093 } 00:27:21.093 19:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:21.093 19:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:21.093 19:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:21.093 19:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:21.093 | .driver_specific 00:27:21.093 | .nvme_error 00:27:21.093 | .status_code 00:27:21.093 | .command_transient_transport_error' 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 282 > 0 )) 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 322586 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 322586 ']' 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 322586 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322586 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322586' 00:27:21.352 killing process with pid 322586 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 322586 00:27:21.352 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.352 00:27:21.352 Latency(us) 00:27:21.352 [2024-12-06T18:26:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.352 [2024-12-06T18:26:06.401Z] =================================================================================================================== 00:27:21.352 [2024-12-06T18:26:06.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.352 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 322586 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=323012 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 323012 
/var/tmp/bperf.sock 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 323012 ']' 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.610 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.610 [2024-12-06 19:26:06.512941] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:27:21.610 [2024-12-06 19:26:06.513025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323012 ] 00:27:21.610 [2024-12-06 19:26:06.578909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.610 [2024-12-06 19:26:06.636586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.868 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.868 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:21.868 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.868 19:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.126 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.383 nvme0n1 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:22.383 19:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.642 Running I/O for 2 seconds... 
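The xtrace above shows the harness driving bdevperf over its UNIX socket: configure retries, attach an NVMe/TCP controller with data digest (`--ddgst`) enabled, arm crc32c corruption injection, then run I/O. That RPC sequence can be sketched as a dry-run (the `rpc` stub below just echoes what the real `scripts/rpc.py -s /var/tmp/bperf.sock` invocation would be; addresses, NQN, and socket path are the ones from this run):

```shell
#!/usr/bin/env bash
# Dry-run stub standing in for scripts/rpc.py against the bperf socket.
rpc() { echo "rpc.py -s /var/tmp/bperf.sock $*"; }

# 1. Disable retry limits so digest errors surface as I/O errors, not retries.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# 2. Start with crc32c error injection disabled (clean attach).
rpc accel_error_inject_error -o crc32c -t disable
# 3. Attach the NVMe/TCP controller with data digest (DDGST) enabled.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 4. Arm corruption of 256 crc32c operations, then run the workload.
rpc accel_error_inject_error -o crc32c -t corrupt -i 256
echo "bdevperf.py -s /var/tmp/bperf.sock perform_tests"
```

With injection armed, each corrupted crc32c produces the `tcp.c: data_crc32_calc_done: *ERROR*: Data digest error` notices and the `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completions that fill the rest of this log.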
00:27:22.642 [2024-12-06 19:26:07.548148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eebb98 00:27:22.642 [2024-12-06 19:26:07.549209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.549247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.558602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef81e0 00:27:22.642 [2024-12-06 19:26:07.559604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.559630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.570896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efdeb0 00:27:22.642 [2024-12-06 19:26:07.572088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.572115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.581167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0ea0 00:27:22.642 [2024-12-06 19:26:07.582329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.582364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.593449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eed4e8 00:27:22.642 [2024-12-06 19:26:07.594753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.594780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.604787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efcdd0 00:27:22.642 [2024-12-06 19:26:07.606215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.606241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.615040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efc998 00:27:22.642 [2024-12-06 19:26:07.616370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.616395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.625328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee4de8 00:27:22.642 [2024-12-06 19:26:07.626472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.626498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.636220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efbcf0 00:27:22.642 [2024-12-06 19:26:07.636970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.636996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.648802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eebb98 00:27:22.642 [2024-12-06 19:26:07.650353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.650379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.659052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef96f8 00:27:22.642 [2024-12-06 19:26:07.660238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.660263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.670247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016edf118 00:27:22.642 [2024-12-06 19:26:07.671329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.671354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.642 [2024-12-06 19:26:07.681415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef1430 00:27:22.642 [2024-12-06 19:26:07.682674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.642 [2024-12-06 19:26:07.682699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.903 [2024-12-06 19:26:07.691545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efe2e8 00:27:22.904 [2024-12-06 19:26:07.693208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.693234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.701690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee3d08 00:27:22.904 [2024-12-06 19:26:07.702570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.702596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.714014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee4de8 00:27:22.904 [2024-12-06 19:26:07.715436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:22.904 [2024-12-06 19:26:07.715462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.725153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eef6a8 00:27:22.904 [2024-12-06 19:26:07.726557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.726583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.736380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee84c0 00:27:22.904 [2024-12-06 19:26:07.737774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.737803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.745336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeff18 00:27:22.904 [2024-12-06 19:26:07.746227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.746253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.756651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee88f8 00:27:22.904 [2024-12-06 19:26:07.757665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.757692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.766861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef4298 00:27:22.904 [2024-12-06 19:26:07.767662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.767687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.777389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0788 00:27:22.904 [2024-12-06 19:26:07.778213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.778239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.789570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee49b0 00:27:22.904 [2024-12-06 19:26:07.790565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.790592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.800877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efcdd0 00:27:22.904 [2024-12-06 19:26:07.802058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.802100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.811872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0350 00:27:22.904 [2024-12-06 19:26:07.812999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.813041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.824248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee38d0 00:27:22.904 [2024-12-06 19:26:07.825548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.825575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.835268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efc560 00:27:22.904 [2024-12-06 19:26:07.836532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.836559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.847545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee7818 00:27:22.904 
[2024-12-06 19:26:07.849391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.849418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.855295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef7970 00:27:22.904 [2024-12-06 19:26:07.856119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.856145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.867882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee95a0 00:27:22.904 [2024-12-06 19:26:07.868876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.868909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.877773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee23b8 00:27:22.904 [2024-12-06 19:26:07.878906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.878933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.890652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1950b20) with pdu=0x200016edf118 00:27:22.904 [2024-12-06 19:26:07.892336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.892362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.900945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef7100 00:27:22.904 [2024-12-06 19:26:07.902222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.902248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.910908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6738 00:27:22.904 [2024-12-06 19:26:07.912455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.912482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.921071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef2948 00:27:22.904 [2024-12-06 19:26:07.921884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.921911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.932052] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efc998 00:27:22.904 [2024-12-06 19:26:07.932885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.932912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:22.904 [2024-12-06 19:26:07.943334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016edf988 00:27:22.904 [2024-12-06 19:26:07.944294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.904 [2024-12-06 19:26:07.944320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:07.954246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eec408 00:27:23.166 [2024-12-06 19:26:07.955343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:07.955371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:07.966504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef1ca0 00:27:23.166 [2024-12-06 19:26:07.967780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:07.967808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:27:23.166 [2024-12-06 19:26:07.976676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6458 00:27:23.166 [2024-12-06 19:26:07.977938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:07.977965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:07.986857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee12d8 00:27:23.166 [2024-12-06 19:26:07.987673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:07.987700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:07.997991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee9e10 00:27:23.166 [2024-12-06 19:26:07.998638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:07.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.010547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efac10 00:27:23.166 [2024-12-06 19:26:08.012088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.012114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.020599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5220 00:27:23.166 [2024-12-06 19:26:08.021750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.021778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.031556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeb760 00:27:23.166 [2024-12-06 19:26:08.032650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.032676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.042795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef3e60 00:27:23.166 [2024-12-06 19:26:08.044045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.044073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.053055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5ec8 00:27:23.166 [2024-12-06 19:26:08.054420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.054448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.064587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eef6a8 00:27:23.166 [2024-12-06 19:26:08.065897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.065927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.076143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee84c0 00:27:23.166 [2024-12-06 19:26:08.077372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.077399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.086496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee88f8 00:27:23.166 [2024-12-06 19:26:08.087583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.087610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.097852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eebb98 00:27:23.166 [2024-12-06 19:26:08.099010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.099037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.109297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efeb58 00:27:23.166 [2024-12-06 19:26:08.110689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.110716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.119479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6fa8 00:27:23.166 [2024-12-06 19:26:08.120433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.120460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.130584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eee190 00:27:23.166 [2024-12-06 19:26:08.131435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.131461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.143628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef9b30 00:27:23.166 [2024-12-06 19:26:08.145466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15526 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:23.166 [2024-12-06 19:26:08.145492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.151443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efac10 00:27:23.166 [2024-12-06 19:26:08.152412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.164623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef7100 00:27:23.166 [2024-12-06 19:26:08.166025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.166066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.174988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef1ca0 00:27:23.166 [2024-12-06 19:26:08.176412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.166 [2024-12-06 19:26:08.176438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:23.166 [2024-12-06 19:26:08.185149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef35f0 00:27:23.166 [2024-12-06 19:26:08.186111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:8137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.167 [2024-12-06 19:26:08.186137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:23.167 [2024-12-06 19:26:08.196301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef8e88 00:27:23.167 [2024-12-06 19:26:08.197153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.167 [2024-12-06 19:26:08.197179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:23.167 [2024-12-06 19:26:08.208856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef9b30 00:27:23.167 [2024-12-06 19:26:08.210525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.167 [2024-12-06 19:26:08.210558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.219087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef1430 00:27:23.427 [2024-12-06 19:26:08.220376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.220404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.229093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeaef0 00:27:23.427 [2024-12-06 19:26:08.230627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.230654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.240206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee1f80 00:27:23.427 [2024-12-06 19:26:08.241378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.241404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.252813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee8088 00:27:23.427 [2024-12-06 19:26:08.254507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.254534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.261936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeb760 00:27:23.427 [2024-12-06 19:26:08.262771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.262799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.273052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee1710 00:27:23.427 
[2024-12-06 19:26:08.274217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.274243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.283277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0a68 00:27:23.427 [2024-12-06 19:26:08.284379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.284404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.295101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0bc0 00:27:23.427 [2024-12-06 19:26:08.296369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.296394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.306484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6fa8 00:27:23.427 [2024-12-06 19:26:08.307422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.307458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.320500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1950b20) with pdu=0x200016ef6458 00:27:23.427 [2024-12-06 19:26:08.322547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.322574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:23.427 [2024-12-06 19:26:08.328821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eed920 00:27:23.427 [2024-12-06 19:26:08.329718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.427 [2024-12-06 19:26:08.329763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.340385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6fa8 00:27:23.428 [2024-12-06 19:26:08.341444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.341471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.352225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016edf550 00:27:23.428 [2024-12-06 19:26:08.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.353426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.363752] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee7c50 00:27:23.428 [2024-12-06 19:26:08.364900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.364928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.377253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efd640 00:27:23.428 [2024-12-06 19:26:08.379044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.379100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.389460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efac10 00:27:23.428 [2024-12-06 19:26:08.391440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.391466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.397662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeea00 00:27:23.428 [2024-12-06 19:26:08.398695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.398745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:27:23.428 [2024-12-06 19:26:08.409162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efa3a0 00:27:23.428 [2024-12-06 19:26:08.410189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.410214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.420846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef7538 00:27:23.428 [2024-12-06 19:26:08.421871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.421899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.432778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0ea0 00:27:23.428 [2024-12-06 19:26:08.434119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.444569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eed920 00:27:23.428 [2024-12-06 19:26:08.446051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.446099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.456395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efe2e8 00:27:23.428 [2024-12-06 19:26:08.458056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.458082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:23.428 [2024-12-06 19:26:08.467048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef81e0 00:27:23.428 [2024-12-06 19:26:08.468310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.428 [2024-12-06 19:26:08.468336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.478303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0ff8 00:27:23.689 [2024-12-06 19:26:08.479702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.689 [2024-12-06 19:26:08.479754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.489774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeee38 00:27:23.689 [2024-12-06 19:26:08.490621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.689 [2024-12-06 19:26:08.490649] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.500719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0630 00:27:23.689 [2024-12-06 19:26:08.501946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.689 [2024-12-06 19:26:08.501973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.512010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efdeb0 00:27:23.689 [2024-12-06 19:26:08.513251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.689 [2024-12-06 19:26:08.513276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.523383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efd640 00:27:23.689 [2024-12-06 19:26:08.524186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.689 [2024-12-06 19:26:08.524213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.536804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef96f8 00:27:23.689 22929.00 IOPS, 89.57 MiB/s [2024-12-06T18:26:08.738Z] [2024-12-06 19:26:08.538644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:23.689 [2024-12-06 19:26:08.538670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:23.689 [2024-12-06 19:26:08.544938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee4de8 00:27:23.689 [2024-12-06 19:26:08.545897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.545923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.556897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef8e88 00:27:23.690 [2024-12-06 19:26:08.558156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.558185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.569181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016edf118 00:27:23.690 [2024-12-06 19:26:08.570445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.570472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.581276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede470 00:27:23.690 [2024-12-06 19:26:08.582642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:6248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.582669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.593135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee4de8 00:27:23.690 [2024-12-06 19:26:08.594638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.594665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.604803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efdeb0 00:27:23.690 [2024-12-06 19:26:08.606465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.606492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.612800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef9f68 00:27:23.690 [2024-12-06 19:26:08.613555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.613581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.626317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede470 00:27:23.690 [2024-12-06 19:26:08.627534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.627563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.636973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6020 00:27:23.690 [2024-12-06 19:26:08.638224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.638252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.648387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6738 00:27:23.690 [2024-12-06 19:26:08.649647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.649685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.658915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5ec8 00:27:23.690 [2024-12-06 19:26:08.659718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.659753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.670382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeb328 00:27:23.690 
[2024-12-06 19:26:08.671062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.671089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.684466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede8a8 00:27:23.690 [2024-12-06 19:26:08.686337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.686364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.692682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee99d8 00:27:23.690 [2024-12-06 19:26:08.693675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.693717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.706616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee3498 00:27:23.690 [2024-12-06 19:26:08.708245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.708272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.717144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1950b20) with pdu=0x200016eeff18 00:27:23.690 [2024-12-06 19:26:08.718282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.718309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:23.690 [2024-12-06 19:26:08.727755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee73e0 00:27:23.690 [2024-12-06 19:26:08.728877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.690 [2024-12-06 19:26:08.728905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.951 [2024-12-06 19:26:08.738959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef2510 00:27:23.951 [2024-12-06 19:26:08.739980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.951 [2024-12-06 19:26:08.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:23.951 [2024-12-06 19:26:08.749442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efc998 00:27:23.951 [2024-12-06 19:26:08.750296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.762154] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eed920 00:27:23.952 [2024-12-06 19:26:08.763451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.763480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.775245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef8e88 00:27:23.952 [2024-12-06 19:26:08.776919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.776947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.784626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6020 00:27:23.952 [2024-12-06 19:26:08.785543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.798116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee4de8 00:27:23.952 [2024-12-06 19:26:08.800000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.800042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:27:23.952 [2024-12-06 19:26:08.806090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef92c0 00:27:23.952 [2024-12-06 19:26:08.806952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.806980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.817665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeee38 00:27:23.952 [2024-12-06 19:26:08.818713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.818750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.830475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede038 00:27:23.952 [2024-12-06 19:26:08.831658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.831686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.842147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef5be8 00:27:23.952 [2024-12-06 19:26:08.843405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.843433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.852807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef7538 00:27:23.952 [2024-12-06 19:26:08.854107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.854135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.864679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efe2e8 00:27:23.952 [2024-12-06 19:26:08.866136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.866164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.876526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0788 00:27:23.952 [2024-12-06 19:26:08.878109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.878137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.888311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeff18 00:27:23.952 [2024-12-06 19:26:08.890068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.890096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.896518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee99d8 00:27:23.952 [2024-12-06 19:26:08.897382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.897409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.909243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef96f8 00:27:23.952 [2024-12-06 19:26:08.910261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.910288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.920840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eff3c8 00:27:23.952 [2024-12-06 19:26:08.921995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.922039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.931689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6020 00:27:23.952 [2024-12-06 19:26:08.932853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.932882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.944347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef2d80 00:27:23.952 [2024-12-06 19:26:08.945645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.945674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.955939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee27f0 00:27:23.952 [2024-12-06 19:26:08.957399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.957426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.966616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee23b8 00:27:23.952 [2024-12-06 19:26:08.968108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.968135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.978129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eebb98 00:27:23.952 [2024-12-06 19:26:08.979547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7673 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:23.952 [2024-12-06 19:26:08.979574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:23.952 [2024-12-06 19:26:08.989794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0788 00:27:23.952 [2024-12-06 19:26:08.991238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:23.952 [2024-12-06 19:26:08.991265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.000493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee73e0 00:27:24.213 [2024-12-06 19:26:09.001946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.001976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.011101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eddc00 00:27:24.213 [2024-12-06 19:26:09.012124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.022535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef35f0 00:27:24.213 [2024-12-06 19:26:09.023404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 
nsid:1 lba:10333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.023431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.034412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efcdd0 00:27:24.213 [2024-12-06 19:26:09.035420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.035454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.047331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6b70 00:27:24.213 [2024-12-06 19:26:09.049267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.049294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.055373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6020 00:27:24.213 [2024-12-06 19:26:09.056229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.056255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.068322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0a68 00:27:24.213 [2024-12-06 19:26:09.069442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.069469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.079002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0ff8 00:27:24.213 [2024-12-06 19:26:09.080225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.080254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.091208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eefae0 00:27:24.213 [2024-12-06 19:26:09.092340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.092367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.101498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efb8b8 00:27:24.213 [2024-12-06 19:26:09.102645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.102671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.114179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef57b0 00:27:24.213 
[2024-12-06 19:26:09.115504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.115532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.125895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eefae0 00:27:24.213 [2024-12-06 19:26:09.127338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.127365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.136585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef2d80 00:27:24.213 [2024-12-06 19:26:09.138037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.138064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.147083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef8618 00:27:24.213 [2024-12-06 19:26:09.148080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.158200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1950b20) with pdu=0x200016efe2e8 00:27:24.213 [2024-12-06 19:26:09.159157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.159184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.168651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede038 00:27:24.213 [2024-12-06 19:26:09.169620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.169647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.181333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5a90 00:27:24.213 [2024-12-06 19:26:09.182481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.182508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.193076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efe2e8 00:27:24.213 [2024-12-06 19:26:09.194355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.194382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.204424] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efeb58 00:27:24.213 [2024-12-06 19:26:09.205851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.214952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ede8a8 00:27:24.213 [2024-12-06 19:26:09.215939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.215967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.226444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016edf550 00:27:24.213 [2024-12-06 19:26:09.227297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.213 [2024-12-06 19:26:09.227323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:24.213 [2024-12-06 19:26:09.239291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eedd58 00:27:24.214 [2024-12-06 19:26:09.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.214 [2024-12-06 19:26:09.240986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:27:24.214 [2024-12-06 19:26:09.249462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee7c50 00:27:24.214 [2024-12-06 19:26:09.250692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.214 [2024-12-06 19:26:09.250741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:24.214 [2024-12-06 19:26:09.259478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0ff8 00:27:24.214 [2024-12-06 19:26:09.261071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.214 [2024-12-06 19:26:09.261100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.269578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efcdd0 00:27:24.474 [2024-12-06 19:26:09.270427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.270455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.280542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eefae0 00:27:24.474 [2024-12-06 19:26:09.281373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.281400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.291891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eec408 00:27:24.474 [2024-12-06 19:26:09.292789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.292817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.302010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef35f0 00:27:24.474 [2024-12-06 19:26:09.302798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.302825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.313348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efda78 00:27:24.474 [2024-12-06 19:26:09.314162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.314188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.324788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeee38 00:27:24.474 [2024-12-06 19:26:09.325810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.325849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.338143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5220 00:27:24.474 [2024-12-06 19:26:09.339659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.339686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.348446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eea680 00:27:24.474 [2024-12-06 19:26:09.349515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.349541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.358466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeee38 00:27:24.474 [2024-12-06 19:26:09.359548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.359573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.370715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeff18 00:27:24.474 [2024-12-06 19:26:09.371985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.372011] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.381783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee8088 00:27:24.474 [2024-12-06 19:26:09.383118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.383144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.392401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee5658 00:27:24.474 [2024-12-06 19:26:09.393646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.474 [2024-12-06 19:26:09.393672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:24.474 [2024-12-06 19:26:09.405971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efb480 00:27:24.474 [2024-12-06 19:26:09.407765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.413707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeb760 00:27:24.475 [2024-12-06 19:26:09.414528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:24.475 [2024-12-06 19:26:09.414554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.424148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eedd58 00:27:24.475 [2024-12-06 19:26:09.424943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.424969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.435230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efdeb0 00:27:24.475 [2024-12-06 19:26:09.436038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.436064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.445962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016efb8b8 00:27:24.475 [2024-12-06 19:26:09.446749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.446792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.459264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef0788 00:27:24.475 [2024-12-06 19:26:09.460483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.460510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.469446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee01f8 00:27:24.475 [2024-12-06 19:26:09.470524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.470550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.480024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eef270 00:27:24.475 [2024-12-06 19:26:09.481074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.481101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.491145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6cc8 00:27:24.475 [2024-12-06 19:26:09.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.492243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.502489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016eeee38 00:27:24.475 [2024-12-06 19:26:09.503540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.503567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:24.475 [2024-12-06 19:26:09.513777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ef6458 00:27:24.475 [2024-12-06 19:26:09.515168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.475 [2024-12-06 19:26:09.515194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:24.734 [2024-12-06 19:26:09.524882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee0630 00:27:24.734 [2024-12-06 19:26:09.525740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.734 [2024-12-06 19:26:09.525769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:24.734 [2024-12-06 19:26:09.535314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950b20) with pdu=0x200016ee6738 00:27:24.734 [2024-12-06 19:26:09.536926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:24.734 [2024-12-06 19:26:09.536953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:24.734 22844.50 IOPS, 89.24 MiB/s 00:27:24.734 Latency(us) 00:27:24.734 [2024-12-06T18:26:09.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:27:24.734 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:24.734 nvme0n1 : 2.01 22850.27 89.26 0.00 0.00 5593.24 2742.80 15631.55 00:27:24.734 [2024-12-06T18:26:09.783Z] =================================================================================================================== 00:27:24.734 [2024-12-06T18:26:09.783Z] Total : 22850.27 89.26 0.00 0.00 5593.24 2742.80 15631.55 00:27:24.734 { 00:27:24.734 "results": [ 00:27:24.734 { 00:27:24.734 "job": "nvme0n1", 00:27:24.734 "core_mask": "0x2", 00:27:24.734 "workload": "randwrite", 00:27:24.734 "status": "finished", 00:27:24.734 "queue_depth": 128, 00:27:24.734 "io_size": 4096, 00:27:24.734 "runtime": 2.005097, 00:27:24.734 "iops": 22850.26609685217, 00:27:24.734 "mibps": 89.25885194082879, 00:27:24.734 "io_failed": 0, 00:27:24.734 "io_timeout": 0, 00:27:24.734 "avg_latency_us": 5593.238771634982, 00:27:24.734 "min_latency_us": 2742.8029629629627, 00:27:24.734 "max_latency_us": 15631.54962962963 00:27:24.734 } 00:27:24.734 ], 00:27:24.734 "core_count": 1 00:27:24.734 } 00:27:24.734 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:24.734 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:24.734 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:24.734 | .driver_specific 00:27:24.734 | .nvme_error 00:27:24.734 | .status_code 00:27:24.734 | .command_transient_transport_error' 00:27:24.734 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 179 > 0 )) 00:27:24.995 19:26:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 323012 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 323012 ']' 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 323012 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323012 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323012' 00:27:24.995 killing process with pid 323012 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 323012 00:27:24.995 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.995 00:27:24.995 Latency(us) 00:27:24.995 [2024-12-06T18:26:10.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.995 [2024-12-06T18:26:10.044Z] =================================================================================================================== 00:27:24.995 [2024-12-06T18:26:10.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.995 19:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 323012 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # 
run_bperf_err randwrite 131072 16 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=323412 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 323412 /var/tmp/bperf.sock 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 323412 ']' 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.254 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.254 [2024-12-06 19:26:10.161810] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:25.254 [2024-12-06 19:26:10.161899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323412 ] 00:27:25.254 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:25.254 Zero copy mechanism will not be used. 00:27:25.254 [2024-12-06 19:26:10.232103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.254 [2024-12-06 19:26:10.289470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.512 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.512 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:25.512 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.512 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.771 19:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.029 nvme0n1 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:26.290 19:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.290 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.290 Zero copy mechanism will not be used. 00:27:26.290 Running I/O for 2 seconds... 
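The RPC sequence above first disables crc32c error injection, attaches an NVMe-oF controller with `--ddgst` (data digest enabled), then re-enables injection in corrupt mode before `perform_tests` — so every write's CRC-32C data digest is deliberately wrong on the wire. For reference, NVMe/TCP's DDGST is CRC-32C (Castagnoli); below is a minimal bit-at-a-time sketch of that checksum. This is an illustrative reference implementation only, not SPDK's (which uses table-driven or hardware-accelerated paths via the accel framework).

```python
# Bit-at-a-time CRC-32C (Castagnoli), the checksum NVMe/TCP uses for
# its DDGST data digest. Reference sketch; production code uses
# table-driven or hardware (e.g. SSE4.2 crc32) implementations.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF                      # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected Castagnoli polynomial
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF               # final XOR

# Standard CRC-32C check value:
assert crc32c(b"123456789") == 0xE3069283
# Any single corrupted payload bit changes the digest, which is what
# the corrupt-mode injection provokes and the target then detects.
assert crc32c(b"023456789") != crc32c(b"123456789")
```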
00:27:26.290 [2024-12-06 19:26:11.202168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.202338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.208903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.209024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.209054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.215511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.215735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.215775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.222225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.222377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.222404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.229199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.229406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.229434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.236458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.236586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.236623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.243180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.243267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.243292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.250212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.250307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.250332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.258679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.258842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.258868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.265265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.265366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.265391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.271842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.271923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.271949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.278458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.278557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.290 [2024-12-06 19:26:11.278581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.285066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.285164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.285189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.291565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.291653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.291678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.297955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.298056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.298095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.304411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.304508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.304532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.311026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.311122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.311146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.317908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.317986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.318011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.325292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.325412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.325438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.290 [2024-12-06 19:26:11.332492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.290 [2024-12-06 19:26:11.332569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.290 [2024-12-06 19:26:11.332594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.339836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.339924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.339952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.346674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.346837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.353977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.354070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.354095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.361182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:26.553 [2024-12-06 19:26:11.361300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.361327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.368065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.368155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.368180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.375209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.375307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.375333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.382398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.382504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.389700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.389807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.389833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.396787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.396880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.396906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.403559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.403687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.403734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.410980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.411075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.411100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.418044] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.418140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.418184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.425678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.425784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.425812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.434694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.434968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.434996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.444099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.444179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.444204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:26.553 [2024-12-06 19:26:11.453490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.453635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.463185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.463302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.463327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.471991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.472214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.472242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.479341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.553 [2024-12-06 19:26:11.479417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.553 [2024-12-06 19:26:11.479442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.553 [2024-12-06 19:26:11.486807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.486895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.486922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.494263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.494378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.494402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.502874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.502959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.502986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.510948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.511117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.517496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.517598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.517623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.524924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.525099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.525124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.532602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.532753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.532780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.539310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.539434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.554 [2024-12-06 19:26:11.539459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.546082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.546165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.546189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.553434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.553545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.553570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.561049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.561232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.561256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.569554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.569665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.569690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.578266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.578492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.578520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.587112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.587223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.587248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.554 [2024-12-06 19:26:11.596548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.554 [2024-12-06 19:26:11.596662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.554 [2024-12-06 19:26:11.596688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.814 [2024-12-06 19:26:11.605652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.814 [2024-12-06 19:26:11.605762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.814 [2024-12-06 19:26:11.605790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.814 [2024-12-06 19:26:11.614801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.814 [2024-12-06 19:26:11.615019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.814 [2024-12-06 19:26:11.615063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.814 [2024-12-06 19:26:11.624262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.624393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.624418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.632683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.632796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.632838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.639033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:26.815 [2024-12-06 19:26:11.639126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.639150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.645342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.645428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.645453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.651676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.651787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.651813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.658085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.658163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.658187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.664622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.664733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.664758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.670755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.670829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.670854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.677137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.677219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.677243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.683390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.683478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.683503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.689531] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.689617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.689641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.695835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.695918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.695944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.702082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.702162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.702187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.708284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.708377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.708417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:26.815 [2024-12-06 19:26:11.714907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.714990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.715033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.721416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.721625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.721652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.729556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.729682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.729729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.737370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.737503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.737529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.745452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.745584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.753336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.753514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.753539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.761647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.761845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.761871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.769611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.769837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.769865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.776294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.776437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.815 [2024-12-06 19:26:11.776463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.815 [2024-12-06 19:26:11.782925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.815 [2024-12-06 19:26:11.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.783113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.789021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.789152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.789176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.795680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.795819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.816 [2024-12-06 19:26:11.795846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.802306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.802443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.802469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.808677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.808841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.808877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.815334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.815421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.815446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.820950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.821047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.821088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.826428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.826529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.826553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.831983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.832130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.837583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.837692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.837717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.843183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.843285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.843311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.848524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.848692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.848744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.854617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.854785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.854812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:26.816 [2024-12-06 19:26:11.860730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:26.816 [2024-12-06 19:26:11.860912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.816 [2024-12-06 19:26:11.860939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.866878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:27.077 [2024-12-06 19:26:11.867013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.867041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.873023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.873210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.879520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.879717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.879752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.885949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.886121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.886148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.892588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.892711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.892748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.898311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.898402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.898426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.903865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.903990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.904016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.909464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.909559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.909588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.915261] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.915433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.921163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.921289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.921315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.927539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.927732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.927758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.933837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.934055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.934081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:27.077 [2024-12-06 19:26:11.940021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.940163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.940189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.946373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.946501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.946527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.952615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.952800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.952828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.958909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.959044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.959071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.965370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.965523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.965555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.971922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.972023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.977883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.978020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.978046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.984196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.984407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.077 [2024-12-06 19:26:11.984433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.077 [2024-12-06 19:26:11.990398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.077 [2024-12-06 19:26:11.990600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:11.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:11.996828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:11.996928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:11.996955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.003607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.003745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.003773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.010163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.010301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.078 [2024-12-06 19:26:12.010327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.016291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.016435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.016461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.022399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.022578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.022605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.028297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.028423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.028449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.034079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.034171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.034197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.039874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.039961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.039986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.045855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.045990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.046016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.052021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.052175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.052201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.058262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.058416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.058442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.064520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.064693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.070785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.070923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.070949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.077617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.084871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:27.078 [2024-12-06 19:26:12.084985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.085019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.090874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.090953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.090979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.096836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.096948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.096974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.102911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.103017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.103043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.108620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.108779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.108807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.114920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.115056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.115083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.078 [2024-12-06 19:26:12.121237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.078 [2024-12-06 19:26:12.121370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.078 [2024-12-06 19:26:12.121397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.127731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.127846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.127881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.133849] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.133993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.134035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.140124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.140364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.140391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.146513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.146670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.146696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.153006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:27.341 [2024-12-06 19:26:12.159426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.159584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.159610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.165431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.165534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.165561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.341 [2024-12-06 19:26:12.171520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.341 [2024-12-06 19:26:12.171634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.341 [2024-12-06 19:26:12.171661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.177863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.177988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.178014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.184445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.184595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.184622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.191011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.191157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.191186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.198538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.198702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.198738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 4545.00 IOPS, 568.12 MiB/s [2024-12-06T18:26:12.391Z] [2024-12-06 19:26:12.206684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.206835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 
[2024-12-06 19:26:12.206863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.214968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.215134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.215163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.223522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.223637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.223665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.230971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.231112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.231139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.239036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.239168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.239196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.246872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.246940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.246966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.256067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.256283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.256310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.265401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.265636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.265663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.274671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.274924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.274953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.284147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.284279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.284306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.293267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.293493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.293520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.302436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.302709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.302746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.310765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.310853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.310881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.318285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.318425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.318452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.325834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.325933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.325973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.332410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.332536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.332563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.338771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with 
pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.338860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.338887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.345148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.345261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.351801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.351914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.351940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.358205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.358302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.358332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.364673] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.364822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.371010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.371178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.371205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.342 [2024-12-06 19:26:12.377563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.342 [2024-12-06 19:26:12.377668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.342 [2024-12-06 19:26:12.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.343 [2024-12-06 19:26:12.384085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.343 [2024-12-06 19:26:12.384199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.343 [2024-12-06 19:26:12.384226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.604 [2024-12-06 
19:26:12.391105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.604 [2024-12-06 19:26:12.391198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.604 [2024-12-06 19:26:12.391224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.604 [2024-12-06 19:26:12.398654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.604 [2024-12-06 19:26:12.398768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.604 [2024-12-06 19:26:12.398795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.604 [2024-12-06 19:26:12.405703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.604 [2024-12-06 19:26:12.405820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.604 [2024-12-06 19:26:12.405847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.604 [2024-12-06 19:26:12.412558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.604 [2024-12-06 19:26:12.412671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.604 [2024-12-06 19:26:12.412697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.419069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.419171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.419197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.425478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.425582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.425608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.432082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.432201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.432227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.438687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.438823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.438852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.445002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.445125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.445150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.451752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.451833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.451860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.458672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.458819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.465962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.466044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.466085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.473857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.473948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.473977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.481033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.481163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.481189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.488388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.488516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.488543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.495491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.495608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.605 [2024-12-06 19:26:12.495635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.502296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.502402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.502440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.509610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.509802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.509830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.516872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.516952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.516979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.524090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.524184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.524209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.531356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.531477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.531505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.538491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.538590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.538614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.545994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.546075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.546101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.552685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.552806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.552834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.559586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.559676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.559703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.566041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.566166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.566193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.572539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.572662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.572689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.579043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:27.605 [2024-12-06 19:26:12.579142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.579168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.585645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.585776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.585804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.592642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.592797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.605 [2024-12-06 19:26:12.592825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.605 [2024-12-06 19:26:12.599464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.605 [2024-12-06 19:26:12.599549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.599574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.606141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.606242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.606267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.613014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.613128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.613154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.619231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.619325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.619350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.625802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.625894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.625920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.632604] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.632717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.632768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.639510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.639596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.639621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.606 [2024-12-06 19:26:12.646835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.606 [2024-12-06 19:26:12.646918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.606 [2024-12-06 19:26:12.646945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.654311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.654456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:27.867 [2024-12-06 19:26:12.661557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.661653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.661679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.668453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.668545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.668571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.675114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.675230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.675257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.681659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.681780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.688453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.688538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.688564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.694963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.695055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.695081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.701143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.701368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.701395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.707389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.707670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.707704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.713496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.713782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.713810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.719582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.719913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.719941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.726737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.727044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.727072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.734023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.734320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.867 [2024-12-06 19:26:12.734348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.740535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.740883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.747204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.747487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.747514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.753547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.753832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.753860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.760255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.760588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.766501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.766810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.766837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.772533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.772833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.772861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.778623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.779019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.779063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.785187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.785465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.785492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.792111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.792385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.792412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.798524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.798820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.867 [2024-12-06 19:26:12.798847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.867 [2024-12-06 19:26:12.805366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.867 [2024-12-06 19:26:12.805644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.805673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.811910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 
00:27:27.868 [2024-12-06 19:26:12.812169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.812195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.818222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.818478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.818504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.824958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.825228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.825256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.831730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.832008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.832035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.837951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.838265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.838291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.844775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.845028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.845064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.851375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.851652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.851688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.858418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.858695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.858729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.865492] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.865787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.865814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.872156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.872519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.872547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.879556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.879841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.879869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.887269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.887615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:27.868 [2024-12-06 19:26:12.895656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.896016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.896046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.903393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.903780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.903808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.868 [2024-12-06 19:26:12.910515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:27.868 [2024-12-06 19:26:12.910814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.868 [2024-12-06 19:26:12.910842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.916045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.916322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.916351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.921813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.922074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.922102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.927323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.927603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.927631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.933024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.933313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.933340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.939363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.939703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.939751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.130 [2024-12-06 19:26:12.945808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1950e60) with pdu=0x200016eff3c8 00:27:28.130 [2024-12-06 19:26:12.946098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.130 [2024-12-06 19:26:12.946127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.393 4606.00 IOPS, 575.75 MiB/s 00:27:28.393 Latency(us) 00:27:28.393 [2024-12-06T18:26:13.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.393 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:28.393 nvme0n1 : 2.00 4607.46 575.93 0.00 0.00 3465.38 2038.90 9757.58 00:27:28.393 [2024-12-06T18:26:13.442Z] =================================================================================================================== 00:27:28.393 [2024-12-06T18:26:13.442Z] Total : 4607.46 575.93 0.00 0.00 3465.38 2038.90 9757.58 00:27:28.393 { 00:27:28.393 "results": [ 00:27:28.393 { 00:27:28.393 "job": "nvme0n1", 00:27:28.393 "core_mask": "0x2", 00:27:28.393 "workload": 
"randwrite", 00:27:28.393 "status": "finished", 00:27:28.393 "queue_depth": 16, 00:27:28.393 "io_size": 131072, 00:27:28.393 "runtime": 2.004143, 00:27:28.393 "iops": 4607.455655609405, 00:27:28.393 "mibps": 575.9319569511756, 00:27:28.393 "io_failed": 0, 00:27:28.393 "io_timeout": 0, 00:27:28.393 "avg_latency_us": 3465.3824567821016, 00:27:28.393 "min_latency_us": 2038.8977777777777, 00:27:28.393 "max_latency_us": 9757.582222222221 00:27:28.393 } 00:27:28.393 ], 00:27:28.393 "core_count": 1 00:27:28.393 } 00:27:28.393 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:28.393 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:28.393 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:28.393 | .driver_specific 00:27:28.393 | .nvme_error 00:27:28.393 | .status_code 00:27:28.393 | .command_transient_transport_error' 00:27:28.393 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 298 > 0 )) 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 323412 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 323412 ']' 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 323412 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.655 19:26:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323412 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323412' 00:27:28.655 killing process with pid 323412 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 323412 00:27:28.655 Received shutdown signal, test time was about 2.000000 seconds 00:27:28.655 00:27:28.655 Latency(us) 00:27:28.655 [2024-12-06T18:26:13.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.655 [2024-12-06T18:26:13.704Z] =================================================================================================================== 00:27:28.655 [2024-12-06T18:26:13.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.655 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 323412 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 322048 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 322048 ']' 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 322048 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322048 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322048' 00:27:28.916 killing process with pid 322048 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 322048 00:27:28.916 19:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 322048 00:27:29.176 00:27:29.176 real 0m15.483s 00:27:29.176 user 0m30.436s 00:27:29.176 sys 0m5.093s 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 ************************************ 00:27:29.176 END TEST nvmf_digest_error 00:27:29.176 ************************************ 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:27:29.176 rmmod nvme_tcp 00:27:29.176 rmmod nvme_fabrics 00:27:29.176 rmmod nvme_keyring 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 322048 ']' 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 322048 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 322048 ']' 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 322048 00:27:29.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (322048) - No such process 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 322048 is not found' 00:27:29.176 Process with pid 322048 is not found 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 
-- # remove_spdk_ns 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.176 19:26:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.710 00:27:31.710 real 0m35.799s 00:27:31.710 user 1m1.880s 00:27:31.710 sys 0m11.969s 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.710 ************************************ 00:27:31.710 END TEST nvmf_digest 00:27:31.710 ************************************ 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.710 ************************************ 00:27:31.710 START TEST nvmf_bdevperf 00:27:31.710 ************************************ 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:31.710 * Looking for test storage... 
00:27:31.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.710 --rc genhtml_branch_coverage=1 00:27:31.710 --rc genhtml_function_coverage=1 00:27:31.710 --rc genhtml_legend=1 00:27:31.710 --rc geninfo_all_blocks=1 00:27:31.710 --rc geninfo_unexecuted_blocks=1 00:27:31.710 00:27:31.710 ' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:27:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.710 --rc genhtml_branch_coverage=1 00:27:31.710 --rc genhtml_function_coverage=1 00:27:31.710 --rc genhtml_legend=1 00:27:31.710 --rc geninfo_all_blocks=1 00:27:31.710 --rc geninfo_unexecuted_blocks=1 00:27:31.710 00:27:31.710 ' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.710 --rc genhtml_branch_coverage=1 00:27:31.710 --rc genhtml_function_coverage=1 00:27:31.710 --rc genhtml_legend=1 00:27:31.710 --rc geninfo_all_blocks=1 00:27:31.710 --rc geninfo_unexecuted_blocks=1 00:27:31.710 00:27:31.710 ' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:31.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.710 --rc genhtml_branch_coverage=1 00:27:31.710 --rc genhtml_function_coverage=1 00:27:31.710 --rc genhtml_legend=1 00:27:31.710 --rc geninfo_all_blocks=1 00:27:31.710 --rc geninfo_unexecuted_blocks=1 00:27:31.710 00:27:31.710 ' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.710 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.711 19:26:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.613 19:26:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:33.613 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.613 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.614 
19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:33.614 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:33.614 Found net devices under 0000:84:00.0: cvl_0_0 00:27:33.614 19:26:18 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:33.614 Found net devices under 0000:84:00.1: cvl_0_1 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:27:33.614 00:27:33.614 --- 10.0.0.2 ping statistics --- 00:27:33.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.614 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:27:33.614 00:27:33.614 --- 10.0.0.1 ping statistics --- 00:27:33.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.614 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.614 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=325814 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 325814 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 325814 ']' 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.872 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.872 [2024-12-06 19:26:18.736931] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:33.872 [2024-12-06 19:26:18.737037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.872 [2024-12-06 19:26:18.810975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.872 [2024-12-06 19:26:18.866939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.872 [2024-12-06 19:26:18.866994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.872 [2024-12-06 19:26:18.867033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.872 [2024-12-06 19:26:18.867046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.872 [2024-12-06 19:26:18.867056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:33.872 [2024-12-06 19:26:18.868523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.872 [2024-12-06 19:26:18.868643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.872 [2024-12-06 19:26:18.868638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.130 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.130 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:34.130 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:34.130 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.130 19:26:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 [2024-12-06 19:26:19.010033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 Malloc0 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.130 [2024-12-06 19:26:19.077109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:34.130 
19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:34.130 { 00:27:34.130 "params": { 00:27:34.130 "name": "Nvme$subsystem", 00:27:34.130 "trtype": "$TEST_TRANSPORT", 00:27:34.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.130 "adrfam": "ipv4", 00:27:34.130 "trsvcid": "$NVMF_PORT", 00:27:34.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.130 "hdgst": ${hdgst:-false}, 00:27:34.130 "ddgst": ${ddgst:-false} 00:27:34.130 }, 00:27:34.130 "method": "bdev_nvme_attach_controller" 00:27:34.130 } 00:27:34.130 EOF 00:27:34.130 )") 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:34.130 19:26:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:34.130 "params": { 00:27:34.130 "name": "Nvme1", 00:27:34.130 "trtype": "tcp", 00:27:34.130 "traddr": "10.0.0.2", 00:27:34.130 "adrfam": "ipv4", 00:27:34.130 "trsvcid": "4420", 00:27:34.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.130 "hdgst": false, 00:27:34.130 "ddgst": false 00:27:34.130 }, 00:27:34.130 "method": "bdev_nvme_attach_controller" 00:27:34.130 }' 00:27:34.130 [2024-12-06 19:26:19.128236] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:27:34.130 [2024-12-06 19:26:19.128315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325937 ] 00:27:34.388 [2024-12-06 19:26:19.200245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.388 [2024-12-06 19:26:19.261551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.648 Running I/O for 1 seconds... 00:27:35.579 8315.00 IOPS, 32.48 MiB/s 00:27:35.579 Latency(us) 00:27:35.579 [2024-12-06T18:26:20.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:35.579 Verification LBA range: start 0x0 length 0x4000 00:27:35.579 Nvme1n1 : 1.04 8088.08 31.59 0.00 0.00 15170.78 3228.25 42331.40 00:27:35.579 [2024-12-06T18:26:20.628Z] =================================================================================================================== 00:27:35.579 [2024-12-06T18:26:20.628Z] Total : 8088.08 31.59 0.00 0.00 15170.78 3228.25 42331.40 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=326082 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:35.837 { 00:27:35.837 "params": { 00:27:35.837 "name": "Nvme$subsystem", 00:27:35.837 "trtype": "$TEST_TRANSPORT", 00:27:35.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.837 "adrfam": "ipv4", 00:27:35.837 "trsvcid": "$NVMF_PORT", 00:27:35.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.837 "hdgst": ${hdgst:-false}, 00:27:35.837 "ddgst": ${ddgst:-false} 00:27:35.837 }, 00:27:35.837 "method": "bdev_nvme_attach_controller" 00:27:35.837 } 00:27:35.837 EOF 00:27:35.837 )") 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:35.837 19:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:35.837 "params": { 00:27:35.837 "name": "Nvme1", 00:27:35.837 "trtype": "tcp", 00:27:35.837 "traddr": "10.0.0.2", 00:27:35.837 "adrfam": "ipv4", 00:27:35.837 "trsvcid": "4420", 00:27:35.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.837 "hdgst": false, 00:27:35.837 "ddgst": false 00:27:35.837 }, 00:27:35.837 "method": "bdev_nvme_attach_controller" 00:27:35.837 }' 00:27:35.837 [2024-12-06 19:26:20.882268] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:27:35.837 [2024-12-06 19:26:20.882368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326082 ] 00:27:36.096 [2024-12-06 19:26:20.953014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.096 [2024-12-06 19:26:21.012492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.356 Running I/O for 15 seconds... 00:27:38.229 8530.00 IOPS, 33.32 MiB/s [2024-12-06T18:26:23.845Z] 8590.50 IOPS, 33.56 MiB/s [2024-12-06T18:26:23.845Z] 19:26:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 325814 00:27:38.796 19:26:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:39.061 [2024-12-06 19:26:23.847531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:39.061 [2024-12-06 19:26:23.847912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.847978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.847994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 
19:26:23.848468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.061 [2024-12-06 19:26:23.848494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.061 [2024-12-06 19:26:23.848506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.848780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.848975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.848994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.849243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.849268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 
[2024-12-06 19:26:23.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:39.062 [2024-12-06 19:26:23.849319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.062 [2024-12-06 19:26:23.849601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.062 [2024-12-06 19:26:23.849614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 
19:26:23.849764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.063 [2024-12-06 19:26:23.849915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.063 [2024-12-06 19:26:23.849930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.849944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.849960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.849974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.849989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.063 [2024-12-06 19:26:23.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.063 [2024-12-06 19:26:23.850778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.850970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.850985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:39.064 [2024-12-06 19:26:23.851166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.064 [2024-12-06 19:26:23.851345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.851357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x805550 is same with the state(6) to be set
00:27:39.064 [2024-12-06 19:26:23.851372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:39.064 [2024-12-06 19:26:23.851382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:39.064 [2024-12-06 19:26:23.851392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48336 len:8 PRP1 0x0 PRP2 0x0
00:27:39.064 [2024-12-06 19:26:23.851403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.064 [2024-12-06 19:26:23.854557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.064 [2024-12-06 19:26:23.854623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.064 [2024-12-06 19:26:23.855244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.064 [2024-12-06 19:26:23.855271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.064 [2024-12-06 19:26:23.855285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.064 [2024-12-06 19:26:23.855473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.064 [2024-12-06 19:26:23.855669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.064 [2024-12-06 19:26:23.855687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.064 [2024-12-06 19:26:23.855702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.064 [2024-12-06 19:26:23.855926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.064 [2024-12-06 19:26:23.868082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.064 [2024-12-06 19:26:23.868421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.064 [2024-12-06 19:26:23.868448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.064 [2024-12-06 19:26:23.868462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.064 [2024-12-06 19:26:23.868648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.064 [2024-12-06 19:26:23.868888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.064 [2024-12-06 19:26:23.868909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.064 [2024-12-06 19:26:23.868922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.064 [2024-12-06 19:26:23.868935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.064 [2024-12-06 19:26:23.881206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.064 [2024-12-06 19:26:23.881565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.064 [2024-12-06 19:26:23.881591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.064 [2024-12-06 19:26:23.881605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.064 [2024-12-06 19:26:23.881836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.064 [2024-12-06 19:26:23.882039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.064 [2024-12-06 19:26:23.882058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.064 [2024-12-06 19:26:23.882071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.064 [2024-12-06 19:26:23.882084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.064 [2024-12-06 19:26:23.894285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.064 [2024-12-06 19:26:23.894570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.064 [2024-12-06 19:26:23.894595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.064 [2024-12-06 19:26:23.894609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.064 [2024-12-06 19:26:23.894840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.064 [2024-12-06 19:26:23.895043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.064 [2024-12-06 19:26:23.895063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.064 [2024-12-06 19:26:23.895076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.064 [2024-12-06 19:26:23.895093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.064 [2024-12-06 19:26:23.907418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.064 [2024-12-06 19:26:23.907732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.064 [2024-12-06 19:26:23.907758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.064 [2024-12-06 19:26:23.907772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.907958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.908148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.908167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.908180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.908192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.920549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.920892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.920917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.920931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.921117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.921306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.921326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.921338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.921350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.933590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.933929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.933954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.933968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.934154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.934345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.934364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.934376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.934387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.946596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.946970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.946996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.947011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.947210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.947401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.947420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.947432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.947444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.959670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.960014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.960054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.960069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.960254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.960444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.960464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.960476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.960488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.972706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.973024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.973049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.973063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.973249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.973440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.973458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.973470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.973482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.985756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.986094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.986120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.986139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.986325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.986515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.986534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.986547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.986559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:23.998842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:23.999197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:23.999222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:23.999235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:23.999421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:23.999611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.065 [2024-12-06 19:26:23.999630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.065 [2024-12-06 19:26:23.999642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.065 [2024-12-06 19:26:23.999654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.065 [2024-12-06 19:26:24.011914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.065 [2024-12-06 19:26:24.012261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.065 [2024-12-06 19:26:24.012286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.065 [2024-12-06 19:26:24.012300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.065 [2024-12-06 19:26:24.012485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.065 [2024-12-06 19:26:24.012675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.066 [2024-12-06 19:26:24.012694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.066 [2024-12-06 19:26:24.012706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.066 [2024-12-06 19:26:24.012717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.066 [2024-12-06 19:26:24.024983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.066 [2024-12-06 19:26:24.025301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.066 [2024-12-06 19:26:24.025326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.066 [2024-12-06 19:26:24.025340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.066 [2024-12-06 19:26:24.025525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.066 [2024-12-06 19:26:24.025715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.066 [2024-12-06 19:26:24.025762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.066 [2024-12-06 19:26:24.025777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.066 [2024-12-06 19:26:24.025790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.066 [2024-12-06 19:26:24.038009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.066 [2024-12-06 19:26:24.038307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.066 [2024-12-06 19:26:24.038333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.066 [2024-12-06 19:26:24.038347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.066 [2024-12-06 19:26:24.038532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.066 [2024-12-06 19:26:24.038732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.066 [2024-12-06 19:26:24.038766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.066 [2024-12-06 19:26:24.038779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.066 [2024-12-06 19:26:24.038791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.066 [2024-12-06 19:26:24.051136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.066 [2024-12-06 19:26:24.051477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.066 [2024-12-06 19:26:24.051502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.066 [2024-12-06 19:26:24.051516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.066 [2024-12-06 19:26:24.051701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.066 [2024-12-06 19:26:24.051920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.066 [2024-12-06 19:26:24.051940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.066 [2024-12-06 19:26:24.051953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.066 [2024-12-06 19:26:24.051965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.066 [2024-12-06 19:26:24.064248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.066 [2024-12-06 19:26:24.064579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.066 [2024-12-06 19:26:24.064604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.066 [2024-12-06 19:26:24.064618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.066 [2024-12-06 19:26:24.064834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.066 [2024-12-06 19:26:24.065036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.066 [2024-12-06 19:26:24.065068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.066 [2024-12-06 19:26:24.065081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.066 [2024-12-06 19:26:24.065097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.066 [2024-12-06 19:26:24.077244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.066 [2024-12-06 19:26:24.077550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.066 [2024-12-06 19:26:24.077575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.066 [2024-12-06 19:26:24.077589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.066 [2024-12-06 19:26:24.077803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.066 [2024-12-06 19:26:24.078001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.066 [2024-12-06 19:26:24.078020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.066 [2024-12-06 19:26:24.078033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.066 [2024-12-06 19:26:24.078060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.066 [2024-12-06 19:26:24.090262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.066 [2024-12-06 19:26:24.090572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.066 [2024-12-06 19:26:24.090597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.066 [2024-12-06 19:26:24.090611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.066 [2024-12-06 19:26:24.090830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.066 [2024-12-06 19:26:24.091027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.066 [2024-12-06 19:26:24.091061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.066 [2024-12-06 19:26:24.091073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.066 [2024-12-06 19:26:24.091085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.324 [2024-12-06 19:26:24.103483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.324 [2024-12-06 19:26:24.103894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.324 [2024-12-06 19:26:24.103923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.324 [2024-12-06 19:26:24.103939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.104162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.104370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.104390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.104404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.104417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.117236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.117597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.117636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.117651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.117888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.118109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.118143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.118155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.118168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.130491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.130822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.130849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.130864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.131073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.131263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.131282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.131294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.131305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.143599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.143961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.143987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.144001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.144202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.144392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.144411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.144423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.144434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.156669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.157025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.157064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.157078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.157268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.157458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.157477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.157489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.157501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.169771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.170117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.170141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.170155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.170341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.170531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.170550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.170562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.170574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.182824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.183129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.183154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.183168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.183353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.183543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.183562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.183574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.183585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.195880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.196193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.196219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.196233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.196419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.196609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.196632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.196645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.196657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.208953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.209291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.209316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.209330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.209515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.209705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.209734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.209765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.209776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.222037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.325 [2024-12-06 19:26:24.222364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.325 [2024-12-06 19:26:24.222389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.325 [2024-12-06 19:26:24.222403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.325 [2024-12-06 19:26:24.222587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.325 [2024-12-06 19:26:24.222805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.325 [2024-12-06 19:26:24.222825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.325 [2024-12-06 19:26:24.222839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.325 [2024-12-06 19:26:24.222851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.325 [2024-12-06 19:26:24.235028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.235361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.235385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.235399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.235585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.235802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.235823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.235835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.235855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 [2024-12-06 19:26:24.248099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.248397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.248422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.248436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.248626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.248846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.248866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.248879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.248891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 7447.00 IOPS, 29.09 MiB/s [2024-12-06T18:26:24.375Z] [2024-12-06 19:26:24.261104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.261412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.261437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.261451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.261636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.261856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.261877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.261890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.261902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 [2024-12-06 19:26:24.274284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.274637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.274686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.274700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.274933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.275149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.275168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.275180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.275192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 [2024-12-06 19:26:24.287277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.287609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.287661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.287675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.287894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.288107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.288126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.288139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.288150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 [2024-12-06 19:26:24.300377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.326 [2024-12-06 19:26:24.300706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.326 [2024-12-06 19:26:24.300739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.326 [2024-12-06 19:26:24.300755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.326 [2024-12-06 19:26:24.300940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.326 [2024-12-06 19:26:24.301130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.326 [2024-12-06 19:26:24.301149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.326 [2024-12-06 19:26:24.301161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.326 [2024-12-06 19:26:24.301173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.326 [2024-12-06 19:26:24.313417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.326 [2024-12-06 19:26:24.313748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.326 [2024-12-06 19:26:24.313773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.326 [2024-12-06 19:26:24.313787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.326 [2024-12-06 19:26:24.313973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.326 [2024-12-06 19:26:24.314163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.326 [2024-12-06 19:26:24.314182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.326 [2024-12-06 19:26:24.314194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.326 [2024-12-06 19:26:24.314205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.326 [2024-12-06 19:26:24.326413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.326 [2024-12-06 19:26:24.326727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.326 [2024-12-06 19:26:24.326767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.326 [2024-12-06 19:26:24.326782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.326 [2024-12-06 19:26:24.326977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.326 [2024-12-06 19:26:24.327184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.326 [2024-12-06 19:26:24.327203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.326 [2024-12-06 19:26:24.327215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.326 [2024-12-06 19:26:24.327227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.326 [2024-12-06 19:26:24.339600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.326 [2024-12-06 19:26:24.339965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.326 [2024-12-06 19:26:24.339991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.326 [2024-12-06 19:26:24.340005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.326 [2024-12-06 19:26:24.340205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.326 [2024-12-06 19:26:24.340396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.326 [2024-12-06 19:26:24.340415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.326 [2024-12-06 19:26:24.340427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.326 [2024-12-06 19:26:24.340439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.326 [2024-12-06 19:26:24.352851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.326 [2024-12-06 19:26:24.353221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.326 [2024-12-06 19:26:24.353246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.326 [2024-12-06 19:26:24.353260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.326 [2024-12-06 19:26:24.353445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.326 [2024-12-06 19:26:24.353643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.326 [2024-12-06 19:26:24.353663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.326 [2024-12-06 19:26:24.353675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.326 [2024-12-06 19:26:24.353687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.326 [2024-12-06 19:26:24.366269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.327 [2024-12-06 19:26:24.366579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.327 [2024-12-06 19:26:24.366605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.327 [2024-12-06 19:26:24.366621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.327 [2024-12-06 19:26:24.366841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.327 [2024-12-06 19:26:24.367057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.327 [2024-12-06 19:26:24.367095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.327 [2024-12-06 19:26:24.367108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.327 [2024-12-06 19:26:24.367120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.585 [2024-12-06 19:26:24.380000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.585 [2024-12-06 19:26:24.380353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.585 [2024-12-06 19:26:24.380381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.585 [2024-12-06 19:26:24.380397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.585 [2024-12-06 19:26:24.380620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.585 [2024-12-06 19:26:24.380865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.585 [2024-12-06 19:26:24.380888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.585 [2024-12-06 19:26:24.380902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.585 [2024-12-06 19:26:24.380916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.585 [2024-12-06 19:26:24.393337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.585 [2024-12-06 19:26:24.393673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.585 [2024-12-06 19:26:24.393698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.585 [2024-12-06 19:26:24.393750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.585 [2024-12-06 19:26:24.393968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.585 [2024-12-06 19:26:24.394206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.585 [2024-12-06 19:26:24.394226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.585 [2024-12-06 19:26:24.394239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.585 [2024-12-06 19:26:24.394251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.585 [2024-12-06 19:26:24.406928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.585 [2024-12-06 19:26:24.407363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.585 [2024-12-06 19:26:24.407400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.585 [2024-12-06 19:26:24.407415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.585 [2024-12-06 19:26:24.407611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.585 [2024-12-06 19:26:24.407883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.585 [2024-12-06 19:26:24.407906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.585 [2024-12-06 19:26:24.407920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.585 [2024-12-06 19:26:24.407938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.585 [2024-12-06 19:26:24.420574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.585 [2024-12-06 19:26:24.420931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.585 [2024-12-06 19:26:24.420960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.585 [2024-12-06 19:26:24.420976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.585 [2024-12-06 19:26:24.421206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.585 [2024-12-06 19:26:24.421402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.585 [2024-12-06 19:26:24.421421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.585 [2024-12-06 19:26:24.421434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.421446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.433946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.434332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.434357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.434371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.434557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.434784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.434807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.434821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.434835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.447230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.447597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.447623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.447637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.447894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.448143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.448163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.448176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.448188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.460950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.461435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.461492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.461508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.461730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.461964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.461986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.462000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.462030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.474309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.474659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.474684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.474698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.474964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.475184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.475204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.475216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.475227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.487712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.488099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.488131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.488145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.488331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.488529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.488548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.488560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.488571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.501009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.501394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.501419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.501433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.501623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.501857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.501879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.501892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.501904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.514078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.514447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.514472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.514486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.514671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.514916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.514937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.514950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.514962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.527210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.527622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.527657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.527671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.527907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.528122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.528142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.528154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.528165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.540344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.540691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.540715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.540755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.540952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.541161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.541185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.541198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.586 [2024-12-06 19:26:24.541210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.586 [2024-12-06 19:26:24.553387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.586 [2024-12-06 19:26:24.553797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.586 [2024-12-06 19:26:24.553832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.586 [2024-12-06 19:26:24.553846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.586 [2024-12-06 19:26:24.554031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.586 [2024-12-06 19:26:24.554231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.586 [2024-12-06 19:26:24.554250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.586 [2024-12-06 19:26:24.554262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.554274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.566603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.567104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.567129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.567147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.587 [2024-12-06 19:26:24.567332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.587 [2024-12-06 19:26:24.567522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.587 [2024-12-06 19:26:24.567540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.587 [2024-12-06 19:26:24.567552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.567563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.579668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.580052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.580087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.580101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.587 [2024-12-06 19:26:24.580286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.587 [2024-12-06 19:26:24.580486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.587 [2024-12-06 19:26:24.580505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.587 [2024-12-06 19:26:24.580516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.580528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.592842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.593172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.593208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.593222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.587 [2024-12-06 19:26:24.593408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.587 [2024-12-06 19:26:24.593627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.587 [2024-12-06 19:26:24.593647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.587 [2024-12-06 19:26:24.593659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.593671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.606145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.606606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.606659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.606673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.587 [2024-12-06 19:26:24.606927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.587 [2024-12-06 19:26:24.607175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.587 [2024-12-06 19:26:24.607197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.587 [2024-12-06 19:26:24.607225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.607239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.619490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.619885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.619913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.619928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.587 [2024-12-06 19:26:24.620156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.587 [2024-12-06 19:26:24.620351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.587 [2024-12-06 19:26:24.620370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.587 [2024-12-06 19:26:24.620382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.587 [2024-12-06 19:26:24.620394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.587 [2024-12-06 19:26:24.632828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.587 [2024-12-06 19:26:24.633189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.587 [2024-12-06 19:26:24.633219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.587 [2024-12-06 19:26:24.633234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.848 [2024-12-06 19:26:24.633425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.848 [2024-12-06 19:26:24.633649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.848 [2024-12-06 19:26:24.633668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.848 [2024-12-06 19:26:24.633680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.848 [2024-12-06 19:26:24.633692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.848 [2024-12-06 19:26:24.646132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.848 [2024-12-06 19:26:24.646544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.848 [2024-12-06 19:26:24.646569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.848 [2024-12-06 19:26:24.646583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.848 [2024-12-06 19:26:24.646794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.848 [2024-12-06 19:26:24.647018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.848 [2024-12-06 19:26:24.647038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.848 [2024-12-06 19:26:24.647051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.848 [2024-12-06 19:26:24.647063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.848 [2024-12-06 19:26:24.659387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.848 [2024-12-06 19:26:24.659813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.848 [2024-12-06 19:26:24.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.848 [2024-12-06 19:26:24.659858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.848 [2024-12-06 19:26:24.660075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.848 [2024-12-06 19:26:24.660314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.848 [2024-12-06 19:26:24.660334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.848 [2024-12-06 19:26:24.660348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.848 [2024-12-06 19:26:24.660359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.848 [2024-12-06 19:26:24.672813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:39.848 [2024-12-06 19:26:24.673282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.848 [2024-12-06 19:26:24.673308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:39.848 [2024-12-06 19:26:24.673322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:39.848 [2024-12-06 19:26:24.673517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:39.848 [2024-12-06 19:26:24.673740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:39.848 [2024-12-06 19:26:24.673787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:39.848 [2024-12-06 19:26:24.673802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:39.848 [2024-12-06 19:26:24.673816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:39.848 [2024-12-06 19:26:24.686306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.848 [2024-12-06 19:26:24.686736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.848 [2024-12-06 19:26:24.686773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.848 [2024-12-06 19:26:24.686789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.848 [2024-12-06 19:26:24.686992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.848 [2024-12-06 19:26:24.687207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.848 [2024-12-06 19:26:24.687227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.848 [2024-12-06 19:26:24.687240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.848 [2024-12-06 19:26:24.687253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.848 [2024-12-06 19:26:24.699565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.848 [2024-12-06 19:26:24.700017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.848 [2024-12-06 19:26:24.700044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.848 [2024-12-06 19:26:24.700059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.848 [2024-12-06 19:26:24.700260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.848 [2024-12-06 19:26:24.700449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.848 [2024-12-06 19:26:24.700468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.848 [2024-12-06 19:26:24.700480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.848 [2024-12-06 19:26:24.700491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.848 [2024-12-06 19:26:24.712816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.848 [2024-12-06 19:26:24.713266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.848 [2024-12-06 19:26:24.713317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.848 [2024-12-06 19:26:24.713331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.848 [2024-12-06 19:26:24.713516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.848 [2024-12-06 19:26:24.713731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.848 [2024-12-06 19:26:24.713752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.848 [2024-12-06 19:26:24.713768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.713781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.726032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.726464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.726489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.726503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.726688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.726933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.726956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.726970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.726983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.739313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.739735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.739764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.739779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.739982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.740198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.740216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.740228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.740240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.752506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.752877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.752904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.752919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.753126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.753316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.753336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.753349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.753361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.765782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.766162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.766190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.766204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.766390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.766589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.766609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.766621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.766633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.778813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.779190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.779215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.779229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.779415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.779603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.779622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.779634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.779646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.791910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.792298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.792323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.792337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.792522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.792712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.792765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.792779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.792791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.805010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.805372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.805397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.805417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.805604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.805823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.805844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.805857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.805869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.818139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.818555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.818581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.818595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.818829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.819045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.819066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.819093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.819106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.831285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.831687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.831712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.831750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.831943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.832150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.832171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.832183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.849 [2024-12-06 19:26:24.832195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.849 [2024-12-06 19:26:24.844389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.849 [2024-12-06 19:26:24.844801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.849 [2024-12-06 19:26:24.844827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.849 [2024-12-06 19:26:24.844840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.849 [2024-12-06 19:26:24.845026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.849 [2024-12-06 19:26:24.845220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.849 [2024-12-06 19:26:24.845238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.849 [2024-12-06 19:26:24.845250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.850 [2024-12-06 19:26:24.845262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.850 [2024-12-06 19:26:24.857553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.850 [2024-12-06 19:26:24.858022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.850 [2024-12-06 19:26:24.858049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.850 [2024-12-06 19:26:24.858079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.850 [2024-12-06 19:26:24.858276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.850 [2024-12-06 19:26:24.858501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.850 [2024-12-06 19:26:24.858522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.850 [2024-12-06 19:26:24.858535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.850 [2024-12-06 19:26:24.858563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.850 [2024-12-06 19:26:24.870831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.850 [2024-12-06 19:26:24.871299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.850 [2024-12-06 19:26:24.871350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.850 [2024-12-06 19:26:24.871364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.850 [2024-12-06 19:26:24.871550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.850 [2024-12-06 19:26:24.871781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.850 [2024-12-06 19:26:24.871803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.850 [2024-12-06 19:26:24.871816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.850 [2024-12-06 19:26:24.871829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:39.850 [2024-12-06 19:26:24.884005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:39.850 [2024-12-06 19:26:24.884447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.850 [2024-12-06 19:26:24.884496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:39.850 [2024-12-06 19:26:24.884510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:39.850 [2024-12-06 19:26:24.884696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:39.850 [2024-12-06 19:26:24.884948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:39.850 [2024-12-06 19:26:24.884970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:39.850 [2024-12-06 19:26:24.884993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:39.850 [2024-12-06 19:26:24.885022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.897265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.897698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.897753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.897768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.897973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.898180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.898201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.898213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.898225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.910352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.910783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.910809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.910823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.911009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.911203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.911223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.911237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.911249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.923809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.924274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.924300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.924315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.924501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.924690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.924735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.924750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.924762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.937042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.937411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.937437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.937452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.937638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.937877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.937900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.937913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.937925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.950195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.950594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.950620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.950633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.950865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.951067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.951088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.951116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.951129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.963279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.963687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.963713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.963752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.963964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.964176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.964196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.964209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.964220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.976396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.976760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.976786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.976805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.976992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.977181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.977201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.977215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.112 [2024-12-06 19:26:24.977227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.112 [2024-12-06 19:26:24.989525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.112 [2024-12-06 19:26:24.989943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.112 [2024-12-06 19:26:24.989970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.112 [2024-12-06 19:26:24.989985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.112 [2024-12-06 19:26:24.990171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.112 [2024-12-06 19:26:24.990360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.112 [2024-12-06 19:26:24.990380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.112 [2024-12-06 19:26:24.990392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:24.990404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.002665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.003032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.003068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.003082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.003268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.003457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.003475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.003487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.003500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.015826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.016248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.016300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.016314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.016500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.016693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.016712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.016751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.016766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.028929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.029351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.029402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.029416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.029601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.029820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.029840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.029853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.029864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.042110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.042469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.042506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.042519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.042706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.042927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.042949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.042962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.042974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.055236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.055657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.055706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.055729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.055944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.056161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.056181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.056199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.056212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.068370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.068776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.068802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.068817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.069003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.069193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.069213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.069225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.069236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.081477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.081835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.081861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.081875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.082060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.082249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.082267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.082281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.082293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.094567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.094987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.095012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.095025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.095211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.095412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.095432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.095446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.095458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.107732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.108119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.108145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.108161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.108358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.108585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.113 [2024-12-06 19:26:25.108608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.113 [2024-12-06 19:26:25.108636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.113 [2024-12-06 19:26:25.108651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.113 [2024-12-06 19:26:25.120990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.113 [2024-12-06 19:26:25.121391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.113 [2024-12-06 19:26:25.121417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.113 [2024-12-06 19:26:25.121432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.113 [2024-12-06 19:26:25.121618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.113 [2024-12-06 19:26:25.121858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.114 [2024-12-06 19:26:25.121881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.114 [2024-12-06 19:26:25.121894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.114 [2024-12-06 19:26:25.121907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.114 [2024-12-06 19:26:25.134147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.114 [2024-12-06 19:26:25.134530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.114 [2024-12-06 19:26:25.134556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.114 [2024-12-06 19:26:25.134570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.114 [2024-12-06 19:26:25.134799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.114 [2024-12-06 19:26:25.135001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.114 [2024-12-06 19:26:25.135036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.114 [2024-12-06 19:26:25.135049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.114 [2024-12-06 19:26:25.135062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.114 [2024-12-06 19:26:25.147265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.114 [2024-12-06 19:26:25.147676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.114 [2024-12-06 19:26:25.147702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.114 [2024-12-06 19:26:25.147731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.114 [2024-12-06 19:26:25.147941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.114 [2024-12-06 19:26:25.148148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.114 [2024-12-06 19:26:25.148169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.114 [2024-12-06 19:26:25.148181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.114 [2024-12-06 19:26:25.148192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.160602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.161046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.161072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.161086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.161273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.161462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.161482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.161495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.161507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.173770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.174186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.174211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.174225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.174411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.174600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.174618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.174631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.174643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.186887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.187316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.187341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.187354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.187540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.187761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.187781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.187794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.187807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.200047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.200453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.200479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.200493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.200680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.200906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.200929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.200942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.200954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.213166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.213548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.213574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.213588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.213819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.214020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.214042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.214055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.214068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.226264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.226674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.226699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.226740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.226969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.227194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.227215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.374 [2024-12-06 19:26:25.227232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.374 [2024-12-06 19:26:25.227245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.374 [2024-12-06 19:26:25.239267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.374 [2024-12-06 19:26:25.239638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.374 [2024-12-06 19:26:25.239663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.374 [2024-12-06 19:26:25.239678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.374 [2024-12-06 19:26:25.239914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.374 [2024-12-06 19:26:25.240130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.374 [2024-12-06 19:26:25.240150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.375 [2024-12-06 19:26:25.240162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.375 [2024-12-06 19:26:25.240174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.375 [2024-12-06 19:26:25.252435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.375 [2024-12-06 19:26:25.252827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.375 [2024-12-06 19:26:25.252852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.375 [2024-12-06 19:26:25.252866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.375 [2024-12-06 19:26:25.253052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.375 [2024-12-06 19:26:25.253241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.375 [2024-12-06 19:26:25.253259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.375 [2024-12-06 19:26:25.253271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.375 [2024-12-06 19:26:25.253282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.375 5585.25 IOPS, 21.82 MiB/s [2024-12-06T18:26:25.424Z] [2024-12-06 19:26:25.265436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.265829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.265855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.265869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.266055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.266245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.266263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.266276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.266288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.278447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.278857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.278883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.278896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.279093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.279282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.279300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.279314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.279325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.291562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.291945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.291970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.291984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.292170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.292360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.292378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.292391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.292402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.304768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.305175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.305200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.305215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.305400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.305590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.305608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.305620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.305632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.317892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.318294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.318319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.318338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.318524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.318713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.318757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.318771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.318784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.331221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.331642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.331682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.331697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.331942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.332186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.332206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.332219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.332231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.344324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.344731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.344772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.344787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.344979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.345185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.345205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.345218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.345230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.357433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.357833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.357883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.357898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.358121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.358341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.358377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.358389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.358402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.370668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.371122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.371174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.371189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.371375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.371564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.371582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.371595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.371607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.384024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.384392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.384418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.384433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.384620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.384861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.384883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.384896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.384909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.397462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.397822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.397849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.375 [2024-12-06 19:26:25.397865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.375 [2024-12-06 19:26:25.398073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.375 [2024-12-06 19:26:25.398282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.375 [2024-12-06 19:26:25.398303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.375 [2024-12-06 19:26:25.398320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.375 [2024-12-06 19:26:25.398332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.375 [2024-12-06 19:26:25.410948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.375 [2024-12-06 19:26:25.411408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.375 [2024-12-06 19:26:25.411459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.376 [2024-12-06 19:26:25.411473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.376 [2024-12-06 19:26:25.411659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.376 [2024-12-06 19:26:25.411900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.376 [2024-12-06 19:26:25.411924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.376 [2024-12-06 19:26:25.411938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.376 [2024-12-06 19:26:25.411952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.633 [2024-12-06 19:26:25.424450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.633 [2024-12-06 19:26:25.424845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.633 [2024-12-06 19:26:25.424874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.633 [2024-12-06 19:26:25.424890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.633 [2024-12-06 19:26:25.425105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.633 [2024-12-06 19:26:25.425296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.633 [2024-12-06 19:26:25.425316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.633 [2024-12-06 19:26:25.425329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.633 [2024-12-06 19:26:25.425342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.633 [2024-12-06 19:26:25.437663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.633 [2024-12-06 19:26:25.438125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.633 [2024-12-06 19:26:25.438174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.633 [2024-12-06 19:26:25.438189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.633 [2024-12-06 19:26:25.438374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.633 [2024-12-06 19:26:25.438564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.633 [2024-12-06 19:26:25.438582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.633 [2024-12-06 19:26:25.438595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.633 [2024-12-06 19:26:25.438607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.633 [2024-12-06 19:26:25.450755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.633 [2024-12-06 19:26:25.451161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.451187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.451201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.451388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.451577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.451597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.451609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.451621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.463881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.634 [2024-12-06 19:26:25.464281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.464306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.464320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.464506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.464695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.464713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.464752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.464768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.477395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.634 [2024-12-06 19:26:25.477774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.477804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.477821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.478055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.478287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.478308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.478322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.478335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.490819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.634 [2024-12-06 19:26:25.491297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.491344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.491363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.491571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.491809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.491842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.491858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.491871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.504506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.634 [2024-12-06 19:26:25.504973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.505003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.505035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.505262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.505462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.505483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.505497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.505509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.518078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.634 [2024-12-06 19:26:25.518465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.634 [2024-12-06 19:26:25.518507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.634 [2024-12-06 19:26:25.518523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.634 [2024-12-06 19:26:25.518748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.634 [2024-12-06 19:26:25.518970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.634 [2024-12-06 19:26:25.518993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.634 [2024-12-06 19:26:25.519023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.634 [2024-12-06 19:26:25.519037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.634 [2024-12-06 19:26:25.531663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.634 [2024-12-06 19:26:25.532079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.634 [2024-12-06 19:26:25.532105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.634 [2024-12-06 19:26:25.532120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.634 [2024-12-06 19:26:25.532316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.634 [2024-12-06 19:26:25.532517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.634 [2024-12-06 19:26:25.532546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.634 [2024-12-06 19:26:25.532561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.634 [2024-12-06 19:26:25.532573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.634 [2024-12-06 19:26:25.545277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.634 [2024-12-06 19:26:25.545671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.634 [2024-12-06 19:26:25.545717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.634 [2024-12-06 19:26:25.545744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.634 [2024-12-06 19:26:25.545962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.634 [2024-12-06 19:26:25.546192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.634 [2024-12-06 19:26:25.546211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.634 [2024-12-06 19:26:25.546223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.634 [2024-12-06 19:26:25.546235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.634 [2024-12-06 19:26:25.558730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.634 [2024-12-06 19:26:25.559146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.634 [2024-12-06 19:26:25.559172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.634 [2024-12-06 19:26:25.559186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.634 [2024-12-06 19:26:25.559372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.634 [2024-12-06 19:26:25.559562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.634 [2024-12-06 19:26:25.559581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.634 [2024-12-06 19:26:25.559594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.634 [2024-12-06 19:26:25.559606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.634 [2024-12-06 19:26:25.572078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.634 [2024-12-06 19:26:25.572394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.634 [2024-12-06 19:26:25.572419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.634 [2024-12-06 19:26:25.572433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.634 [2024-12-06 19:26:25.572619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.634 [2024-12-06 19:26:25.572839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.634 [2024-12-06 19:26:25.572860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.634 [2024-12-06 19:26:25.572873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.634 [2024-12-06 19:26:25.572890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.635 [2024-12-06 19:26:25.585345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.635 [2024-12-06 19:26:25.585802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.635 [2024-12-06 19:26:25.585828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.635 [2024-12-06 19:26:25.585843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.635 [2024-12-06 19:26:25.586036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.635 [2024-12-06 19:26:25.586239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.635 [2024-12-06 19:26:25.586267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.635 [2024-12-06 19:26:25.586279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.635 [2024-12-06 19:26:25.586291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.635 [2024-12-06 19:26:25.598473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.635 [2024-12-06 19:26:25.598829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.635 [2024-12-06 19:26:25.598854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.635 [2024-12-06 19:26:25.598868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.635 [2024-12-06 19:26:25.599054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.635 [2024-12-06 19:26:25.599242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.635 [2024-12-06 19:26:25.599263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.635 [2024-12-06 19:26:25.599275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.635 [2024-12-06 19:26:25.599287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:40.635 [2024-12-06 19:26:25.611692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.612106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.612163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.612178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.612394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.612631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.612653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.612672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.612687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.635 [2024-12-06 19:26:25.624951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.625365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.625413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.625428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.625614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.625851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.625873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.625886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.625899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.635 [2024-12-06 19:26:25.638143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.638518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.638544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.638558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.638788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.639011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.639032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.639045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.639058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.635 [2024-12-06 19:26:25.651293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.651685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.651711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.651749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.651949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.652158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.652178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.652190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.652203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.635 [2024-12-06 19:26:25.664322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.664705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.664749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.664766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.664966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.665173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.665193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.665206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.665218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.635 [2024-12-06 19:26:25.677454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.635 [2024-12-06 19:26:25.677808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.635 [2024-12-06 19:26:25.677834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.635 [2024-12-06 19:26:25.677848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.635 [2024-12-06 19:26:25.678033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.635 [2024-12-06 19:26:25.678223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.635 [2024-12-06 19:26:25.678258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.635 [2024-12-06 19:26:25.678270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.635 [2024-12-06 19:26:25.678284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.690840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.691298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.691324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.691338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.691530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.691757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.691794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.691808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.691821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.704324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.704704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.704752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.704775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.704971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.705183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.705209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.705223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.705235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.717609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.718025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.718071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.718085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.718277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.718471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.718491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.718504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.718516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.730875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.731269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.731297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.731312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.731520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.731741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.731763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.731777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.731790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.744223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.744555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.744581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.744595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.744820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.745052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.745087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.745101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.745117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.757546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.757880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.757908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.757924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.758150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.758346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.758365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.758378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.758390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.770804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.771168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.771193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.771208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.771399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.771595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.771614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.771626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.771638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.784101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.784507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.784540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.784554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.784790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.784997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.785019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.785032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.785045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.895 [2024-12-06 19:26:25.797417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.895 [2024-12-06 19:26:25.797803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.895 [2024-12-06 19:26:25.797845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.895 [2024-12-06 19:26:25.797861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.895 [2024-12-06 19:26:25.798072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.895 [2024-12-06 19:26:25.798268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.895 [2024-12-06 19:26:25.798289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.895 [2024-12-06 19:26:25.798302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.895 [2024-12-06 19:26:25.798314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.810717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.811116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.811142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.811157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.811348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.811543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.811563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.811575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.811586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.824035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.824403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.824429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.824443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.824634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.824858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.824879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.824892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.824904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.837271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.837617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.837643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.837657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.837902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.838125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.838146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.838174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.838187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.850509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.850894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.850922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.850937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.851165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.851361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.851380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.851392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.851405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.863748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.864200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.864226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.864242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.864446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.864671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.864692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.864705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.864718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.877167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.877576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.877603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.877618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.877858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.878100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.878125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.878153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.878165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.890586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.890984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.891031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.891047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.891254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.891449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.891469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.891482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.891495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.903961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.904410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.904436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.904450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.904641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.904865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.904887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.904901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.904914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.917328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:40.896 [2024-12-06 19:26:25.917741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.896 [2024-12-06 19:26:25.917778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:40.896 [2024-12-06 19:26:25.917793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:40.896 [2024-12-06 19:26:25.917990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:40.896 [2024-12-06 19:26:25.918200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:40.896 [2024-12-06 19:26:25.918221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:40.896 [2024-12-06 19:26:25.918233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:40.896 [2024-12-06 19:26:25.918245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:40.896 [2024-12-06 19:26:25.930587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:40.896 [2024-12-06 19:26:25.931004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.896 [2024-12-06 19:26:25.931047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:40.896 [2024-12-06 19:26:25.931061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:40.896 [2024-12-06 19:26:25.931252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:40.896 [2024-12-06 19:26:25.931447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:40.896 [2024-12-06 19:26:25.931467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:40.896 [2024-12-06 19:26:25.931480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:40.896 [2024-12-06 19:26:25.931492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:25.943957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:25.944389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:25.944414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:25.944428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:25.944619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:25.944862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:25.944884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:25.944899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:25.944912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:25.957322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:25.957729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:25.957765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:25.957780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:25.957978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:25.958187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:25.958208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:25.958220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:25.958232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:25.970575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:25.971028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:25.971058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:25.971073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:25.971265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:25.971459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:25.971478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:25.971491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:25.971503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:25.983894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:25.984279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:25.984305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:25.984320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:25.984511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:25.984734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:25.984755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:25.984784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:25.984798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:25.997230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:25.997647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:25.997672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:25.997687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:25.997915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:25.998129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:25.998150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:25.998163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:25.998175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:26.010562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:26.011021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:26.011047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:26.011062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:26.011257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:26.011452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:26.011483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:26.011496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:26.011508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:26.023865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:26.024291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:26.024317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:26.024332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:26.024523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:26.024749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:26.024785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:26.024800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:26.024813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:26.037180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:26.037550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:26.037576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:26.037591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:26.037828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:26.038051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:26.038072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.158 [2024-12-06 19:26:26.038100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.158 [2024-12-06 19:26:26.038113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.158 [2024-12-06 19:26:26.050312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.158 [2024-12-06 19:26:26.050693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.158 [2024-12-06 19:26:26.050718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.158 [2024-12-06 19:26:26.050763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.158 [2024-12-06 19:26:26.050966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.158 [2024-12-06 19:26:26.051181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.158 [2024-12-06 19:26:26.051202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.051219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.051232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.063568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.063992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.064019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.064049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.064240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.064435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.064455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.064468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.064480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.076816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.077256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.077282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.077296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.077486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.077681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.077701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.077714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.077751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.090126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.090506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.090532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.090547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.090764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.090989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.091012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.091025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.091038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.103306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.103688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.103714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.103752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.103957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.104190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.104211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.104224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.104236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.116558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.116966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.116996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.117012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.117277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.117510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.117536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.117551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.117569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.129974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.130402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.130428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.130442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.130634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.130866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.130887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.130901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.130914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.143427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.143848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.143881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.143898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.144129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.144324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.144344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.144357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.144369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.156857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.157305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.157331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.157345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.157536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.157760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.157798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.157813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.157826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.170233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.170655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.170681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.170696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.170935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.171156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.171176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.171189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.171201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.183477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.183846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.183874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.183890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.184101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.159 [2024-12-06 19:26:26.184305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.159 [2024-12-06 19:26:26.184326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.159 [2024-12-06 19:26:26.184339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.159 [2024-12-06 19:26:26.184352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.159 [2024-12-06 19:26:26.196836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.159 [2024-12-06 19:26:26.197220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.159 [2024-12-06 19:26:26.197245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.159 [2024-12-06 19:26:26.197260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.159 [2024-12-06 19:26:26.197451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.160 [2024-12-06 19:26:26.197646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.160 [2024-12-06 19:26:26.197664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.160 [2024-12-06 19:26:26.197677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.160 [2024-12-06 19:26:26.197689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.421 [2024-12-06 19:26:26.210206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.421 [2024-12-06 19:26:26.210645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.421 [2024-12-06 19:26:26.210672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.421 [2024-12-06 19:26:26.210687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.421 [2024-12-06 19:26:26.210915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.421 [2024-12-06 19:26:26.211136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.421 [2024-12-06 19:26:26.211158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.421 [2024-12-06 19:26:26.211171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.421 [2024-12-06 19:26:26.211183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.421 [2024-12-06 19:26:26.223508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.421 [2024-12-06 19:26:26.223935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.421 [2024-12-06 19:26:26.223962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.421 [2024-12-06 19:26:26.223977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.421 [2024-12-06 19:26:26.224185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.421 [2024-12-06 19:26:26.224380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.421 [2024-12-06 19:26:26.224401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.421 [2024-12-06 19:26:26.224419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.421 [2024-12-06 19:26:26.224433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.421 [2024-12-06 19:26:26.236810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.421 [2024-12-06 19:26:26.237203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.237229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.237243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.237433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.237627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.237646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.237659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.237671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.250050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.250459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.250484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.250499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.250689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.250935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.250957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.250971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.250985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 4468.20 IOPS, 17.45 MiB/s [2024-12-06T18:26:26.471Z] [2024-12-06 19:26:26.263402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.263802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.263829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.263844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.264057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.264252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.264272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.264286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.264298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.276635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.277024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.277065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.277080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.277270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.277467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.277488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.277501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.277513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.290051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.290466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.290492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.290506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.290696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.290940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.290962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.290976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.290989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.303292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.303697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.303744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.303761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.303958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.304169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.304190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.304203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.304215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.316611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.317034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.317070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.317263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.317458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.317478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.317491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.317503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.329941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.330310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.330337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.330352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.330544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.330780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.330810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.330823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.330836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.343210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.343621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.343647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.343662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.422 [2024-12-06 19:26:26.343884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.422 [2024-12-06 19:26:26.344099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.422 [2024-12-06 19:26:26.344120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.422 [2024-12-06 19:26:26.344132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.422 [2024-12-06 19:26:26.344144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.422 [2024-12-06 19:26:26.356497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.422 [2024-12-06 19:26:26.356882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.422 [2024-12-06 19:26:26.356909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.422 [2024-12-06 19:26:26.356925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.357134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.357334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.357355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.357368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.357379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.369875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.370291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.370318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.370334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.370585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.370837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.370869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.370884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.370898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.383224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.383604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.383630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.383645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.383889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.384127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.384148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.384162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.384174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.396534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.396960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.396988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.397003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.397210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.397405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.397424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.397442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.397455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.409894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.410265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.410291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.410306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.410502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.410718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.410747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.410760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.410773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.423364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.423755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.423782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.423797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.424015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.424217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.424236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.424248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.424260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.436649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.437131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.437155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.437169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.437380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.437580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.437600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.437612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.437624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.449931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.450326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.450387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.450583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.450812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.450832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.450845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.450857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.423 [2024-12-06 19:26:26.463278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.423 [2024-12-06 19:26:26.463698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.423 [2024-12-06 19:26:26.463745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.423 [2024-12-06 19:26:26.463760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.423 [2024-12-06 19:26:26.463977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.423 [2024-12-06 19:26:26.464196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.423 [2024-12-06 19:26:26.464215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.423 [2024-12-06 19:26:26.464228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.423 [2024-12-06 19:26:26.464239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.476574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.684 [2024-12-06 19:26:26.477036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.684 [2024-12-06 19:26:26.477075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.684 [2024-12-06 19:26:26.477090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.684 [2024-12-06 19:26:26.477286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.684 [2024-12-06 19:26:26.477486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.684 [2024-12-06 19:26:26.477505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.684 [2024-12-06 19:26:26.477518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.684 [2024-12-06 19:26:26.477529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.489898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.684 [2024-12-06 19:26:26.490333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.684 [2024-12-06 19:26:26.490373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.684 [2024-12-06 19:26:26.490392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.684 [2024-12-06 19:26:26.490590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.684 [2024-12-06 19:26:26.490818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.684 [2024-12-06 19:26:26.490839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.684 [2024-12-06 19:26:26.490852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.684 [2024-12-06 19:26:26.490864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.503192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.684 [2024-12-06 19:26:26.503627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.684 [2024-12-06 19:26:26.503665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.684 [2024-12-06 19:26:26.503680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.684 [2024-12-06 19:26:26.503923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.684 [2024-12-06 19:26:26.504151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.684 [2024-12-06 19:26:26.504185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.684 [2024-12-06 19:26:26.504197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.684 [2024-12-06 19:26:26.504209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.516550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.684 [2024-12-06 19:26:26.516991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.684 [2024-12-06 19:26:26.517031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.684 [2024-12-06 19:26:26.517046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.684 [2024-12-06 19:26:26.517242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.684 [2024-12-06 19:26:26.517442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.684 [2024-12-06 19:26:26.517461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.684 [2024-12-06 19:26:26.517473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.684 [2024-12-06 19:26:26.517485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.529866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.684 [2024-12-06 19:26:26.530312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.684 [2024-12-06 19:26:26.530351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.684 [2024-12-06 19:26:26.530366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.684 [2024-12-06 19:26:26.530562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.684 [2024-12-06 19:26:26.530794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.684 [2024-12-06 19:26:26.530815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.684 [2024-12-06 19:26:26.530828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.684 [2024-12-06 19:26:26.530840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.684 [2024-12-06 19:26:26.543157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.684 [2024-12-06 19:26:26.543599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.684 [2024-12-06 19:26:26.543624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.684 [2024-12-06 19:26:26.543653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.684 [2024-12-06 19:26:26.543897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.684 [2024-12-06 19:26:26.544126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.684 [2024-12-06 19:26:26.544146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.684 [2024-12-06 19:26:26.544158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.684 [2024-12-06 19:26:26.544169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.684 [2024-12-06 19:26:26.556516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.684 [2024-12-06 19:26:26.556957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.684 [2024-12-06 19:26:26.556983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.684 [2024-12-06 19:26:26.556997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.684 [2024-12-06 19:26:26.557193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.684 [2024-12-06 19:26:26.557393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.684 [2024-12-06 19:26:26.557412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.684 [2024-12-06 19:26:26.557424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.684 [2024-12-06 19:26:26.557435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.684 [2024-12-06 19:26:26.569813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.684 [2024-12-06 19:26:26.570240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.684 [2024-12-06 19:26:26.570278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.684 [2024-12-06 19:26:26.570293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.684 [2024-12-06 19:26:26.570489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.684 [2024-12-06 19:26:26.570689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.684 [2024-12-06 19:26:26.570708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.684 [2024-12-06 19:26:26.570748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.684 [2024-12-06 19:26:26.570764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.684 [2024-12-06 19:26:26.583188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.684 [2024-12-06 19:26:26.583637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.684 [2024-12-06 19:26:26.583677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.684 [2024-12-06 19:26:26.583692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.684 [2024-12-06 19:26:26.583922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.684 [2024-12-06 19:26:26.584145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.684 [2024-12-06 19:26:26.584164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.684 [2024-12-06 19:26:26.584176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.584188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.596576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.596926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.596952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.596966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.597161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.597362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.597380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.597392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.597404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.609857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.610244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.610293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.610307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.610511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.610730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.610750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.610763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.610776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.623181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.623556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.623583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.623598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.623829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.624084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.624106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.624119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.624132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.636759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.637156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.637181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.637208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.637405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.637605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.637624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.637636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.637648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.649961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.650327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.650372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.650386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.650590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.650813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.650833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.650845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.650857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.663135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.663530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.663584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.663602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.663835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.664052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.664070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.664082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.664093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.676321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.676678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.676703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.676717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.676935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.677148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.677166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.677178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.677189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.689601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.689941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.689993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.690007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.690196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.690391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.690409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.690421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.690432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.702883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.703223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.703261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.703275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.703479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.703678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.703697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.685 [2024-12-06 19:26:26.703709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.685 [2024-12-06 19:26:26.703729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.685 [2024-12-06 19:26:26.716279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.685 [2024-12-06 19:26:26.716641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.685 [2024-12-06 19:26:26.716693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.685 [2024-12-06 19:26:26.716707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.685 [2024-12-06 19:26:26.716925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.685 [2024-12-06 19:26:26.717139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.685 [2024-12-06 19:26:26.717158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.686 [2024-12-06 19:26:26.717170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.686 [2024-12-06 19:26:26.717181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.686 [2024-12-06 19:26:26.729459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.686 [2024-12-06 19:26:26.729754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.686 [2024-12-06 19:26:26.729779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.686 [2024-12-06 19:26:26.729793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.686 [2024-12-06 19:26:26.729984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.686 [2024-12-06 19:26:26.730178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.686 [2024-12-06 19:26:26.730197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.686 [2024-12-06 19:26:26.730209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.686 [2024-12-06 19:26:26.730220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.742711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.743155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.743168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.743374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.743568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.743587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.743604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.743616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.755878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.756322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.756360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.756375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.756566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.756770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.756800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.756812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.756823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.769071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.769523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.769571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.769585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.769815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.770016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.770035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.770061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.770073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.782232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.782667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.782716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.782739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.782949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.783162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.783181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.783192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.783203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.795270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.795743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.795784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.795798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.796002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.796196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.796215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.796226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.796238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.808293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.808693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.808717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.808755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.808958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.809172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.946 [2024-12-06 19:26:26.809191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.946 [2024-12-06 19:26:26.809203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.946 [2024-12-06 19:26:26.809214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.946 [2024-12-06 19:26:26.821525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.946 [2024-12-06 19:26:26.821969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.946 [2024-12-06 19:26:26.822008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.946 [2024-12-06 19:26:26.822022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.946 [2024-12-06 19:26:26.822212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.946 [2024-12-06 19:26:26.822406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.947 [2024-12-06 19:26:26.822424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.947 [2024-12-06 19:26:26.822436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.947 [2024-12-06 19:26:26.822447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.947 [2024-12-06 19:26:26.834755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.947 [2024-12-06 19:26:26.835185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.947 [2024-12-06 19:26:26.835209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.947 [2024-12-06 19:26:26.835246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.947 [2024-12-06 19:26:26.835438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.947 [2024-12-06 19:26:26.835632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.947 [2024-12-06 19:26:26.835650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.947 [2024-12-06 19:26:26.835662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.947 [2024-12-06 19:26:26.835673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 325814 Killed "${NVMF_APP[@]}" "$@"
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=326863
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 326863
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 326863 ']'
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:41.947 19:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.947 [2024-12-06 19:26:26.848132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:41.947 [2024-12-06 19:26:26.848496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.947 [2024-12-06 19:26:26.848534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420
00:27:41.947 [2024-12-06 19:26:26.848548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set
00:27:41.947 [2024-12-06 19:26:26.848796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor
00:27:41.947 [2024-12-06 19:26:26.849031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:41.947 [2024-12-06 19:26:26.849052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:41.947 [2024-12-06 19:26:26.849064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:41.947 [2024-12-06 19:26:26.849090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:41.947 [2024-12-06 19:26:26.861489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.947 [2024-12-06 19:26:26.861841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.947 [2024-12-06 19:26:26.861871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.947 [2024-12-06 19:26:26.861887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.947 [2024-12-06 19:26:26.862096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.947 [2024-12-06 19:26:26.862300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.947 [2024-12-06 19:26:26.862319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.947 [2024-12-06 19:26:26.862332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.947 [2024-12-06 19:26:26.862343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.947 [2024-12-06 19:26:26.874694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.947 [2024-12-06 19:26:26.875181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.947 [2024-12-06 19:26:26.875207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.947 [2024-12-06 19:26:26.875221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.947 [2024-12-06 19:26:26.875456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.947 [2024-12-06 19:26:26.875677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.947 [2024-12-06 19:26:26.875698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.947 [2024-12-06 19:26:26.875734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.947 [2024-12-06 19:26:26.875749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.947 [2024-12-06 19:26:26.887942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.947 [2024-12-06 19:26:26.888339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.947 [2024-12-06 19:26:26.888389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.947 [2024-12-06 19:26:26.888403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.947 [2024-12-06 19:26:26.888607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.947 [2024-12-06 19:26:26.888833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.947 [2024-12-06 19:26:26.888854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.947 [2024-12-06 19:26:26.888867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.947 [2024-12-06 19:26:26.888878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:41.947 [2024-12-06 19:26:26.898932] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:27:41.947 [2024-12-06 19:26:26.899020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.947 [2024-12-06 19:26:26.901287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.947 [2024-12-06 19:26:26.901643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.947 [2024-12-06 19:26:26.901699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.947 [2024-12-06 19:26:26.901714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.947 [2024-12-06 19:26:26.901941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.947 [2024-12-06 19:26:26.902167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.947 [2024-12-06 19:26:26.902187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.947 [2024-12-06 19:26:26.902199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.947 [2024-12-06 19:26:26.902211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.947 [2024-12-06 19:26:26.914900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.947 [2024-12-06 19:26:26.915310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.947 [2024-12-06 19:26:26.915349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.947 [2024-12-06 19:26:26.915364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.947 [2024-12-06 19:26:26.915572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.947 [2024-12-06 19:26:26.915802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.947 [2024-12-06 19:26:26.915823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.947 [2024-12-06 19:26:26.915836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.947 [2024-12-06 19:26:26.915848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.947 [2024-12-06 19:26:26.928397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.948 [2024-12-06 19:26:26.928806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.948 [2024-12-06 19:26:26.928833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.948 [2024-12-06 19:26:26.928848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.948 [2024-12-06 19:26:26.929066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.948 [2024-12-06 19:26:26.929267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.948 [2024-12-06 19:26:26.929287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.948 [2024-12-06 19:26:26.929299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.948 [2024-12-06 19:26:26.929311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.948 [2024-12-06 19:26:26.941836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.948 [2024-12-06 19:26:26.942241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.948 [2024-12-06 19:26:26.942280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.948 [2024-12-06 19:26:26.942295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.948 [2024-12-06 19:26:26.942495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.948 [2024-12-06 19:26:26.942696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.948 [2024-12-06 19:26:26.942739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.948 [2024-12-06 19:26:26.942753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.948 [2024-12-06 19:26:26.942765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.948 [2024-12-06 19:26:26.955233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.948 [2024-12-06 19:26:26.955674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.948 [2024-12-06 19:26:26.955712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.948 [2024-12-06 19:26:26.955735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.948 [2024-12-06 19:26:26.955953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.948 [2024-12-06 19:26:26.956172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.948 [2024-12-06 19:26:26.956205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.948 [2024-12-06 19:26:26.956218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.948 [2024-12-06 19:26:26.956230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.948 [2024-12-06 19:26:26.968715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.948 [2024-12-06 19:26:26.969188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.948 [2024-12-06 19:26:26.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.948 [2024-12-06 19:26:26.969243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.948 [2024-12-06 19:26:26.969440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.948 [2024-12-06 19:26:26.969640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.948 [2024-12-06 19:26:26.969659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.948 [2024-12-06 19:26:26.969671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.948 [2024-12-06 19:26:26.969683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:41.948 [2024-12-06 19:26:26.975801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.948 [2024-12-06 19:26:26.982180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:41.948 [2024-12-06 19:26:26.982650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.948 [2024-12-06 19:26:26.982678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:41.948 [2024-12-06 19:26:26.982694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:41.948 [2024-12-06 19:26:26.982942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:41.948 [2024-12-06 19:26:26.983193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:41.948 [2024-12-06 19:26:26.983219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:41.948 [2024-12-06 19:26:26.983234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:41.948 [2024-12-06 19:26:26.983246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:26.995680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:26.996146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:26.996190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:26.996209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:26.996425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:26.996641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:26.996662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:26.996677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:26.996690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:27.009098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.009482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.009522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.009538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:27.009784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:27.009993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:27.010013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:27.010026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:27.010038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:27.022313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.022825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.022867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.022883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:27.023100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:27.023301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:27.023320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:27.023333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:27.023356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:42.209 [2024-12-06 19:26:27.033963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.209 [2024-12-06 19:26:27.033997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.209 [2024-12-06 19:26:27.034026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.209 [2024-12-06 19:26:27.034038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:42.209 [2024-12-06 19:26:27.034048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.209 [2024-12-06 19:26:27.035512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.209 [2024-12-06 19:26:27.035577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.209 [2024-12-06 19:26:27.035580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.209 [2024-12-06 19:26:27.035680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.036119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.036147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.036162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:27.036373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:27.036597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:27.036617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:27.036631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:27.036643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:27.049233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.049884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.049924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.049943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:27.050173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:27.050402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:27.050423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:27.050439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:27.050456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:27.062769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.063364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.063418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.063438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.209 [2024-12-06 19:26:27.063668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.209 [2024-12-06 19:26:27.063928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.209 [2024-12-06 19:26:27.063950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.209 [2024-12-06 19:26:27.063968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.209 [2024-12-06 19:26:27.063984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.209 [2024-12-06 19:26:27.076334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.209 [2024-12-06 19:26:27.076839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.209 [2024-12-06 19:26:27.076879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.209 [2024-12-06 19:26:27.076899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.077118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.077347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.077368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.077385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.077401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.089859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.090342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.090391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.090409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.090642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.090882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.090903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.090919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.090934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.103468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.104053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.104092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.104111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.104344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.104563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.104594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.104611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.104627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.117092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.117673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.117709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.117752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.117994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.118232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.118253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.118270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.118285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.130581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.131059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.131088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.131104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.131320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.131550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.131572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.131587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.131600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.144226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.144708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.144759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.144776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.144992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.145213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.145234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.145248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.145261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.210 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:42.210 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.210 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.210 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.210 [2024-12-06 19:26:27.157841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.158272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.158314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.158329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.158539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.158781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.158802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.158816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.158828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.210 [2024-12-06 19:26:27.171384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.210 [2024-12-06 19:26:27.171816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.210 [2024-12-06 19:26:27.171845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.210 [2024-12-06 19:26:27.171861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.210 [2024-12-06 19:26:27.172077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.210 [2024-12-06 19:26:27.172299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.210 [2024-12-06 19:26:27.172320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.210 [2024-12-06 19:26:27.172333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.210 [2024-12-06 19:26:27.172346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.211 [2024-12-06 19:26:27.182986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.211 [2024-12-06 19:26:27.185067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 [2024-12-06 19:26:27.185460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.211 [2024-12-06 19:26:27.185511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.211 [2024-12-06 19:26:27.185526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.211 [2024-12-06 19:26:27.185790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.211 [2024-12-06 19:26:27.186012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.211 [2024-12-06 19:26:27.186047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.211 [2024-12-06 19:26:27.186061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.211 [2024-12-06 19:26:27.186073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.211 [2024-12-06 19:26:27.198614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 [2024-12-06 19:26:27.199093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.211 [2024-12-06 19:26:27.199124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.211 [2024-12-06 19:26:27.199141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.211 [2024-12-06 19:26:27.199353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.211 [2024-12-06 19:26:27.199575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.211 [2024-12-06 19:26:27.199595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.211 [2024-12-06 19:26:27.199611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.211 [2024-12-06 19:26:27.199625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.211 [2024-12-06 19:26:27.212086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 [2024-12-06 19:26:27.212516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.211 [2024-12-06 19:26:27.212542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.211 [2024-12-06 19:26:27.212557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.211 [2024-12-06 19:26:27.212817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.211 [2024-12-06 19:26:27.213053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.211 [2024-12-06 19:26:27.213088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.211 [2024-12-06 19:26:27.213101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.211 [2024-12-06 19:26:27.213114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.211 [2024-12-06 19:26:27.225572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 [2024-12-06 19:26:27.226118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.211 [2024-12-06 19:26:27.226168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.211 [2024-12-06 19:26:27.226186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.211 [2024-12-06 19:26:27.226427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.211 [2024-12-06 19:26:27.226647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.211 [2024-12-06 19:26:27.226668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:42.211 [2024-12-06 19:26:27.226685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.211 [2024-12-06 19:26:27.226715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:42.211 Malloc0 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.211 [2024-12-06 19:26:27.239388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 [2024-12-06 19:26:27.239823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:42.211 [2024-12-06 19:26:27.239852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80fc60 with addr=10.0.0.2, port=4420 00:27:42.211 [2024-12-06 19:26:27.239868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fc60 is same with the state(6) to be set 00:27:42.211 [2024-12-06 19:26:27.240085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fc60 (9): Bad file descriptor 00:27:42.211 [2024-12-06 19:26:27.240315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:42.211 [2024-12-06 19:26:27.240336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] controller reinitialization failed 00:27:42.211 [2024-12-06 19:26:27.240350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:42.211 [2024-12-06 19:26:27.240362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:42.211 [2024-12-06 19:26:27.250430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.211 [2024-12-06 19:26:27.252984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.211 19:26:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 326082 00:27:42.472 3723.50 IOPS, 14.54 MiB/s [2024-12-06T18:26:27.521Z] [2024-12-06 19:26:27.281605] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:44.347 4397.14 IOPS, 17.18 MiB/s [2024-12-06T18:26:30.331Z] 4943.00 IOPS, 19.31 MiB/s [2024-12-06T18:26:31.710Z] 5365.67 IOPS, 20.96 MiB/s [2024-12-06T18:26:32.275Z] 5705.50 IOPS, 22.29 MiB/s [2024-12-06T18:26:33.651Z] 5991.91 IOPS, 23.41 MiB/s [2024-12-06T18:26:34.585Z] 6227.25 IOPS, 24.33 MiB/s [2024-12-06T18:26:35.525Z] 6429.62 IOPS, 25.12 MiB/s [2024-12-06T18:26:36.457Z] 6596.50 IOPS, 25.77 MiB/s [2024-12-06T18:26:36.457Z] 6733.93 IOPS, 26.30 MiB/s 00:27:51.408 Latency(us) 00:27:51.408 [2024-12-06T18:26:36.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:51.408 Verification LBA range: start 0x0 length 0x4000 00:27:51.408 Nvme1n1 : 15.01 6737.60 26.32 10017.92 0.00 7616.94 807.06 17379.18 00:27:51.408 [2024-12-06T18:26:36.457Z] =================================================================================================================== 00:27:51.408 [2024-12-06T18:26:36.457Z] Total : 6737.60 26.32 10017.92 0.00 7616.94 807.06 17379.18 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.667 rmmod nvme_tcp 00:27:51.667 rmmod nvme_fabrics 00:27:51.667 rmmod nvme_keyring 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 326863 ']' 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 326863 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 326863 ']' 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 326863 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326863 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326863' 00:27:51.667 killing process with pid 326863 00:27:51.667 19:26:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 326863 00:27:51.667 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 326863 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.926 19:26:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.459 00:27:54.459 real 0m22.680s 00:27:54.459 user 1m0.286s 00:27:54.459 sys 0m4.601s 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.459 ************************************ 00:27:54.459 END TEST nvmf_bdevperf 00:27:54.459 
************************************ 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.459 ************************************ 00:27:54.459 START TEST nvmf_target_disconnect 00:27:54.459 ************************************ 00:27:54.459 19:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:54.459 * Looking for test storage... 00:27:54.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.459 --rc genhtml_branch_coverage=1 00:27:54.459 --rc genhtml_function_coverage=1 00:27:54.459 --rc genhtml_legend=1 00:27:54.459 --rc geninfo_all_blocks=1 00:27:54.459 --rc geninfo_unexecuted_blocks=1 
00:27:54.459 00:27:54.459 ' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.459 --rc genhtml_branch_coverage=1 00:27:54.459 --rc genhtml_function_coverage=1 00:27:54.459 --rc genhtml_legend=1 00:27:54.459 --rc geninfo_all_blocks=1 00:27:54.459 --rc geninfo_unexecuted_blocks=1 00:27:54.459 00:27:54.459 ' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.459 --rc genhtml_branch_coverage=1 00:27:54.459 --rc genhtml_function_coverage=1 00:27:54.459 --rc genhtml_legend=1 00:27:54.459 --rc geninfo_all_blocks=1 00:27:54.459 --rc geninfo_unexecuted_blocks=1 00:27:54.459 00:27:54.459 ' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.459 --rc genhtml_branch_coverage=1 00:27:54.459 --rc genhtml_function_coverage=1 00:27:54.459 --rc genhtml_legend=1 00:27:54.459 --rc geninfo_all_blocks=1 00:27:54.459 --rc geninfo_unexecuted_blocks=1 00:27:54.459 00:27:54.459 ' 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.459 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.460 19:26:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:54.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.460 19:26:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.366 
19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:56.366 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:56.366 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.366 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:56.367 Found net devices under 0000:84:00.0: cvl_0_0 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:56.367 Found net devices under 0000:84:00.1: cvl_0_1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.367 19:26:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:27:56.367 00:27:56.367 --- 10.0.0.2 ping statistics --- 00:27:56.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.367 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:56.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:27:56.367 00:27:56.367 --- 10.0.0.1 ping statistics --- 00:27:56.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.367 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.367 19:26:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.367 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.626 ************************************ 00:27:56.626 START TEST nvmf_target_disconnect_tc1 00:27:56.626 ************************************ 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.626 [2024-12-06 19:26:41.542751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.626 [2024-12-06 19:26:41.542849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117f570 with 
addr=10.0.0.2, port=4420 00:27:56.626 [2024-12-06 19:26:41.542884] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:56.626 [2024-12-06 19:26:41.542908] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:56.626 [2024-12-06 19:26:41.542922] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:56.626 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:56.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:56.626 Initializing NVMe Controllers 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:56.626 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:56.626 00:27:56.626 real 0m0.100s 00:27:56.626 user 0m0.049s 00:27:56.627 sys 0m0.048s 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:56.627 ************************************ 00:27:56.627 END TEST nvmf_target_disconnect_tc1 00:27:56.627 ************************************ 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:56.627 19:26:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.627 ************************************ 00:27:56.627 START TEST nvmf_target_disconnect_tc2 00:27:56.627 ************************************ 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=330035 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 330035 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 330035 ']' 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.627 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.627 [2024-12-06 19:26:41.656417] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:27:56.627 [2024-12-06 19:26:41.656506] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.886 [2024-12-06 19:26:41.729811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.886 [2024-12-06 19:26:41.788648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.886 [2024-12-06 19:26:41.788717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.886 [2024-12-06 19:26:41.788750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.886 [2024-12-06 19:26:41.788763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.886 [2024-12-06 19:26:41.788773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:56.886 [2024-12-06 19:26:41.790399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:56.886 [2024-12-06 19:26:41.790463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:56.886 [2024-12-06 19:26:41.790527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:56.886 [2024-12-06 19:26:41.790530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.886 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.886 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:56.886 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.886 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.886 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 Malloc0 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 [2024-12-06 19:26:41.987895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 [2024-12-06 19:26:42.016201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=330068 00:27:57.144 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:57.145 19:26:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.058 19:26:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 330035 00:27:59.058 19:26:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 
Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 [2024-12-06 19:26:44.041640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 
00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Read completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 00:27:59.058 Write completed with error (sct=0, sc=8) 00:27:59.058 starting I/O failed 
00:27:59.058 [2024-12-06 19:26:44.042043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.058 [2024-12-06 19:26:44.042224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.058 [2024-12-06 19:26:44.042251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.058 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.042432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.042455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.042555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.042580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.042702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.042753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.042857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.042884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.042978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.043163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.043333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.043497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.043608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.043790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.043945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.043972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.044127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.044281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.044454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.044634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.044777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.044911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.044937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.045080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.045241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.045418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.045583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.045752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.045868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.045894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.046056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.046248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.046420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.046580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.046747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 00:27:59.059 [2024-12-06 19:26:44.046897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.046923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.059 qpair failed and we were unable to recover it. 
00:27:59.059 [2024-12-06 19:26:44.047048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.059 [2024-12-06 19:26:44.047072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.047183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.047208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.047326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.047350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.047520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.047544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.047674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.047713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 
00:27:59.060 [2024-12-06 19:26:44.047998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.048156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.048288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.048456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.048652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 
00:27:59.060 [2024-12-06 19:26:44.048793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.048939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.048965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.049127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.049242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.049403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 
00:27:59.060 [2024-12-06 19:26:44.049559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.049730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.049912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.049938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.050058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.050235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 
00:27:59.060 [2024-12-06 19:26:44.050398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.050555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.050677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.050860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.050886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.051040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 
00:27:59.060 [2024-12-06 19:26:44.051179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.051344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.051492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.051644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.060 qpair failed and we were unable to recover it. 00:27:59.060 [2024-12-06 19:26:44.051755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.060 [2024-12-06 19:26:44.051780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.061 [2024-12-06 19:26:44.051943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.051968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.052100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.052265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.052404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.052598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.061 [2024-12-06 19:26:44.052737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.052895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.052920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.053051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.053214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.053349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.061 [2024-12-06 19:26:44.053508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.053646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.053813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.053838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.054000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.054186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.061 [2024-12-06 19:26:44.054366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.054520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.054687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.054829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.054853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.055019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.061 [2024-12-06 19:26:44.055160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.055339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.055510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.055665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 00:27:59.061 [2024-12-06 19:26:44.055838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.061 [2024-12-06 19:26:44.055862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.061 qpair failed and we were unable to recover it. 
00:27:59.065 [2024-12-06 19:26:44.074439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.065 [2024-12-06 19:26:44.074463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.065 qpair failed and we were unable to recover it. 00:27:59.065 [2024-12-06 19:26:44.074630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.065 [2024-12-06 19:26:44.074669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.065 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.074811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.074856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.074976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.075131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.075300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.075458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.075612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.075749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.075893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.076093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.076271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.076425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.076560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.076771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.076895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.076927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.077648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.077959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.077984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.078151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.078284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.078468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.078599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.078769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.078932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.078971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.079062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.079086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 
00:27:59.066 [2024-12-06 19:26:44.079230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.079254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.066 qpair failed and we were unable to recover it. 00:27:59.066 [2024-12-06 19:26:44.079375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.066 [2024-12-06 19:26:44.079399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.079569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.079593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.079714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.079745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.079881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.079906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.080070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.080206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.080382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.080539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.080706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.080876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.080901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.081021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.081208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.081380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.081580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.081747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.081876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.081911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.082089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.082279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.082435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.082594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.082738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.082922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.082946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.083358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.083876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 00:27:59.067 [2024-12-06 19:26:44.083988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.067 [2024-12-06 19:26:44.084026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.067 qpair failed and we were unable to recover it. 
00:27:59.067 [2024-12-06 19:26:44.084163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.084307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.084468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.084612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.084754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 
00:27:59.068 [2024-12-06 19:26:44.084922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.084946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.085077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.085100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.085238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.085262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.085399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.085423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.085607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.085631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 
00:27:59.068 [2024-12-06 19:26:44.085779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.085830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.085959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.086011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.086148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.086186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.086316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.086340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 00:27:59.068 [2024-12-06 19:26:44.086449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.068 [2024-12-06 19:26:44.086477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.068 qpair failed and we were unable to recover it. 
00:27:59.068 [2024-12-06 19:26:44.086626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.068 [2024-12-06 19:26:44.086650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:27:59.068 qpair failed and we were unable to recover it.
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Write completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 Read completed with error (sct=0, sc=8)
00:27:59.071 starting I/O failed
00:27:59.071 [2024-12-06 19:26:44.099017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:59.071 [2024-12-06 19:26:44.099145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.071 [2024-12-06 19:26:44.099190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.071 qpair failed and we were unable to recover it.
00:27:59.348 [2024-12-06 19:26:44.104146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.104211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.104381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.104433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.104535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.104559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.104669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.104693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.104861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.104912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 
00:27:59.348 [2024-12-06 19:26:44.105046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.105191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.105390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.105547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.105693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 
00:27:59.348 [2024-12-06 19:26:44.105838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.105945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.105970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 
00:27:59.348 [2024-12-06 19:26:44.106557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.106897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.106988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.107014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.107165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.107233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 
00:27:59.348 [2024-12-06 19:26:44.107418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.107483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.107618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.107667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.107874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.348 [2024-12-06 19:26:44.107900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.348 qpair failed and we were unable to recover it. 00:27:59.348 [2024-12-06 19:26:44.108021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.108186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.108395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.108575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.108701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.108960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.108985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.109082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.109242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.109399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.109553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.109691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.109883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.109908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.110034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.110164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.110349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.110501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.110659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.110850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.110887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.111024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.111155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.111312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.111481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.111727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.111953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.111978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.112108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.112196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.112343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.112366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.112471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.112495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 
00:27:59.349 [2024-12-06 19:26:44.112629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.112653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.112793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.349 [2024-12-06 19:26:44.112841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.349 qpair failed and we were unable to recover it. 00:27:59.349 [2024-12-06 19:26:44.112960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.113161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.113313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.113459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.113570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.113729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.113874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.113899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.114039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.114234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.114387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.114522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.114677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.114824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.114848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.114979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.115114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.115294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.115484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.115634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.115821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.115874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.116004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.116030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.116214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.116239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.116500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.116567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.116773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.116799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.116963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.117010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.117248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.117314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.117491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.117565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.117792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.117819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.350 [2024-12-06 19:26:44.117940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.117965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 
00:27:59.350 [2024-12-06 19:26:44.118109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.350 [2024-12-06 19:26:44.118133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.350 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.118375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.118440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.118582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.118613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.118713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.118746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.118934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.118958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.119065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.119103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.119264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.119313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.119474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.119522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.119716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.119745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.119852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.119877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.120040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.120088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.120260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.120309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.120448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.120497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.120663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.120712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.120899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.120923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.121042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.121096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.121261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.121310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.121500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.121549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.121704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.121776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.121898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.121924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.122056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.122095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.122210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.122280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.122458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.122520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.122714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.122747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.122883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.122908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.123053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.123106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.123251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.123308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.123497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.123545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.123678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.123741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 00:27:59.351 [2024-12-06 19:26:44.123937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.351 [2024-12-06 19:26:44.123961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.351 qpair failed and we were unable to recover it. 
00:27:59.351 [2024-12-06 19:26:44.124099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.124147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.124339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.124387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.124524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.124566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.124742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.124788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.124921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.124946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 
00:27:59.352 [2024-12-06 19:26:44.125072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.125096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.125207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.125231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.125376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.125425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.125578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.125636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.125791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.125817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 
00:27:59.352 [2024-12-06 19:26:44.125968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.126149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.126326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.126460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.126681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 
00:27:59.352 [2024-12-06 19:26:44.126916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.126940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.127066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.127138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.127324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.127372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.127533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.127581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.127756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.127798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 
00:27:59.352 [2024-12-06 19:26:44.127944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.127982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.128064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.128107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.128298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.128348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.128498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.128553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.128715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.128780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 
00:27:59.352 [2024-12-06 19:26:44.128900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.128925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.129076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.129099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.352 [2024-12-06 19:26:44.129256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.352 [2024-12-06 19:26:44.129304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.352 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.129470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.129518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.129675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.129712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 
00:27:59.353 [2024-12-06 19:26:44.129880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.129904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.130043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.130092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.130281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.130336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.130528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.130576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.130764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.130815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 
00:27:59.353 [2024-12-06 19:26:44.130928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.130952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.131061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.131096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.131332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.131381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.131622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.131657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.131770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.131806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 
00:27:59.353 [2024-12-06 19:26:44.131908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.131944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.132078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.132113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.132309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.132357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.132529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.132579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.132744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.132781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 
00:27:59.353 [2024-12-06 19:26:44.132920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.132957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.133157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.133217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.133396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.133442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.133550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.133587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.353 qpair failed and we were unable to recover it. 00:27:59.353 [2024-12-06 19:26:44.133799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.353 [2024-12-06 19:26:44.133853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 
00:27:59.354 [2024-12-06 19:26:44.133957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.133994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.134241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.134289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.134466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.134525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.134700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.134745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.134887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.134924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 
00:27:59.354 [2024-12-06 19:26:44.135140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.135188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.135407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.135445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.135568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.135638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.135848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.135885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.136046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.136086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 
00:27:59.354 [2024-12-06 19:26:44.136223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.136272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.136489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.136537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.136687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.136736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.136930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.136978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.137134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.137206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 
00:27:59.354 [2024-12-06 19:26:44.137389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.137433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.137544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.137583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.137823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.137863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.138000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.138042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 00:27:59.354 [2024-12-06 19:26:44.138243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.354 [2024-12-06 19:26:44.138292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.354 qpair failed and we were unable to recover it. 
00:27:59.354 [2024-12-06 19:26:44.138494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.138542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.138744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.138786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.138937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.138984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.139181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.139231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.139472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.139513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.139697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.139765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.139951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.139992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.140174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.354 [2024-12-06 19:26:44.140216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.354 qpair failed and we were unable to recover it.
00:27:59.354 [2024-12-06 19:26:44.140370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.140430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.140631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.140679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.140858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.140913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.141119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.141178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.141394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.141443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.141672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.141770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.142076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.142119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.142318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.142398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.142640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.142684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.142859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.142913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.143048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.143096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.143282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.143328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.143510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.143570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.143820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.143867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.144120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.144165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.144359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.144415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.144638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.144873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.144919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.145087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.145144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.145363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.145424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.145597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.145643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.145888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.145936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.146185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.146253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.146547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.146593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.146804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.146851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.147111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.147179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.147420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.147468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.147638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.147687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.147932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.147981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.355 [2024-12-06 19:26:44.148145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.355 [2024-12-06 19:26:44.148194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.355 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.148368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.148437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.148652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.148700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.149100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.149172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.149360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.149428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.149634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.149682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.149926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.149976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.150210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.150283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.150637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.150685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.151066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.151131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.151314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.151380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.151602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.151660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.151847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.151913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.152195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.152271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.152502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.152568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.152796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.152864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.153036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.153101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.153288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.153365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.153538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.153587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.153780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.153834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.154007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.154066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.154248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.154297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.154477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.154526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.154776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.154845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.155016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.155064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.155225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.155282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.155448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.155508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.155695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.155754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.155947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.155996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.156160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.156209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.356 qpair failed and we were unable to recover it.
00:27:59.356 [2024-12-06 19:26:44.156374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.356 [2024-12-06 19:26:44.156422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.156615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.156664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.156879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.156936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.157182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.157231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.157423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.157472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.157637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.157685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.157868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.157917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.158119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.158167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.158473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.158521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.158686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.158747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.158881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.158913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.159033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.159064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Write completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 Read completed with error (sct=0, sc=8)
00:27:59.357 starting I/O failed
00:27:59.357 [2024-12-06 19:26:44.159449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.357 [2024-12-06 19:26:44.159682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.159785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.159930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.159965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.160163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.160232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.160541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.160618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.160892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.357 [2024-12-06 19:26:44.160919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.357 qpair failed and we were unable to recover it.
00:27:59.357 [2024-12-06 19:26:44.161033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.358 [2024-12-06 19:26:44.161060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.358 qpair failed and we were unable to recover it.
00:27:59.358 [2024-12-06 19:26:44.161272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.358 [2024-12-06 19:26:44.161304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:27:59.358 qpair failed and we were unable to recover it.
00:27:59.358 [2024-12-06 19:26:44.161460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.161521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.161694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.161784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.161945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.161977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.162134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.162205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.162411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.162476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 
00:27:59.358 [2024-12-06 19:26:44.162669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.162717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.162863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.162895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.163056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.163127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.163301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.163333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.163452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.163483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 
00:27:59.358 [2024-12-06 19:26:44.163659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.163708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.163855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.163886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.164044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.164102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.164311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.164362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.164552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.164606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 
00:27:59.358 [2024-12-06 19:26:44.164818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.164851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.165024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.165080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.165265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.165297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.165419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.165450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.165637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.165685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 
00:27:59.358 [2024-12-06 19:26:44.165877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.165908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.166003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.166035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.166216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.166265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.166443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.166502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.166670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.166718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 
00:27:59.358 [2024-12-06 19:26:44.166855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.166886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.167031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.167062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.167231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.358 [2024-12-06 19:26:44.167280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.358 qpair failed and we were unable to recover it. 00:27:59.358 [2024-12-06 19:26:44.167458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.167517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.167694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.167731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 
00:27:59.359 [2024-12-06 19:26:44.167844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.167877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.167981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.168036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.168241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.168273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.168450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.168499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.168696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.168759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 
00:27:59.359 [2024-12-06 19:26:44.168885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.168916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.169056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.169088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.169285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.169356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.169547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.169600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.169841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.169873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 
00:27:59.359 [2024-12-06 19:26:44.169976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.170025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.170169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.170200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.170365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.170396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.170622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.170680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.170894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.170926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 
00:27:59.359 [2024-12-06 19:26:44.171067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.171115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.171372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.171420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.171563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.171633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.171838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.171864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.171950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.171976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 
00:27:59.359 [2024-12-06 19:26:44.172118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.172150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.172292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.172323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.359 qpair failed and we were unable to recover it. 00:27:59.359 [2024-12-06 19:26:44.172471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.359 [2024-12-06 19:26:44.172520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.172759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.172810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.172923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.172949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.173137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.173186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.173377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.173443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.173608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.173662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.173878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.174036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.174067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.174204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.174252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.174418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.174467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.174664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.174713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.174862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.174893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.175052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.175100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.175315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.175364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.175504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.175563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.175770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.175802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.175940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.175971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.176087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.176146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.176420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.176479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.176657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.176689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.176863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.176890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.176984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.177027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.177197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.177228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.177412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.177461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.177625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.177685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.177842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.177873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.178005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.178036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 00:27:59.360 [2024-12-06 19:26:44.178207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.178255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.360 qpair failed and we were unable to recover it. 
00:27:59.360 [2024-12-06 19:26:44.178430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.360 [2024-12-06 19:26:44.178479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.178700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.178773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.178912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.178972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.179202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.179251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.179414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.179462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 
00:27:59.361 [2024-12-06 19:26:44.179671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.179735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.179894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.179967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.180145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.180212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.180397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.180464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 00:27:59.361 [2024-12-06 19:26:44.180630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.361 [2024-12-06 19:26:44.180678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.361 qpair failed and we were unable to recover it. 
00:27:59.365 [2024-12-06 19:26:44.206974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.207034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.207190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.207239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.207415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.207463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.207670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.207734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.207875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.207923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 
00:27:59.365 [2024-12-06 19:26:44.208067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.208115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.208304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.208352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.208530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.208579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.208717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.208778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.208940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.208989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 
00:27:59.365 [2024-12-06 19:26:44.209157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.209206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.209376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.209424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.209587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.209636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.209781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.209830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.210049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.210097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 
00:27:59.365 [2024-12-06 19:26:44.210262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.210311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.210509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.210565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.210747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.210800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.210952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.211001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.211155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.211205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 
00:27:59.365 [2024-12-06 19:26:44.211367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.211415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.365 qpair failed and we were unable to recover it. 00:27:59.365 [2024-12-06 19:26:44.211588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.365 [2024-12-06 19:26:44.211637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.211784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.211833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.212055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.212114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.212330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.212378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.212547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.212595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.212776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.212825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.213028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.213077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.213247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.213322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.213467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.213515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.213807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.213875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.214036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.214108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.214324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.214373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.214543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.214591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.214824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.214873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.215008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.215066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.215299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.215364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.215499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.215546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.215732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.215792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.216093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.216296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.216353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.216559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.216617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.216806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.216889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.217089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.217155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.217471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.217518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.217744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.217805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.218008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.218080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.218306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.218354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.218573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.218622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-12-06 19:26:44.218790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.366 [2024-12-06 19:26:44.218874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-12-06 19:26:44.219170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.219235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.219483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.219549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.219891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.219950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.220208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.220257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.220606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.220655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-12-06 19:26:44.220838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.220911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.221188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.221271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.221555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.221621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.221849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.221915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.222121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.222188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-12-06 19:26:44.222417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.222483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.222645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.222700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.222944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.223012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.223219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.223288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.223426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.223475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-12-06 19:26:44.223692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.223752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.223949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.224018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.224172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.224243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.224469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.224535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.224709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.224776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-12-06 19:26:44.224958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.225029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.225241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.225294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.225516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.225574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.225886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.225961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-12-06 19:26:44.226150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.367 [2024-12-06 19:26:44.226224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-12-06 19:26:44.226397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.367 [2024-12-06 19:26:44.226445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.367 qpair failed and we were unable to recover it.
00:27:59.371 [2024-12-06 19:26:44.254970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.371 [2024-12-06 19:26:44.255036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.371 qpair failed and we were unable to recover it. 00:27:59.371 [2024-12-06 19:26:44.255270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.371 [2024-12-06 19:26:44.255319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.371 qpair failed and we were unable to recover it. 00:27:59.371 [2024-12-06 19:26:44.255560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.371 [2024-12-06 19:26:44.255607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.371 qpair failed and we were unable to recover it. 00:27:59.371 [2024-12-06 19:26:44.255860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.255927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.256167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.256234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.256431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.256495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.256668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.256716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.256949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.257016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.257198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.257264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.257439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.257488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.257671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.257719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.257891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.257949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.258252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.258301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.258492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.258554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.258777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.258848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.259055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.259104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.259287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.259336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.259559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.259618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.259807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.259872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.260101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.260165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.260344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.260408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.260583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.260629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.260842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.260888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.261082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.261127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.261347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.261393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.261571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.261628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.261803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.261861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.262048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.262107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.262277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.262336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.262518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.262572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 
00:27:59.372 [2024-12-06 19:26:44.262797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.262866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.263167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.263216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.263408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.263480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.263654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.372 [2024-12-06 19:26:44.263709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.372 qpair failed and we were unable to recover it. 00:27:59.372 [2024-12-06 19:26:44.263911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.263969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.264140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.264197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.264343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.264405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.264589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.264651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.264969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.265019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.265213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.265262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.265449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.265514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.265745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.265796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.266011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.266060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.266245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.266293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.266632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.266874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.266923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.267146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.267214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.267421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.267469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.267622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.267687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.267957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.268023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.268227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.268294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.268449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.268505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.268757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.268807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.268995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.269071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.269327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.269379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.269592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.269640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.270011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.270084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.270334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.270387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.270545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.270594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.270769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.270825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 
00:27:59.373 [2024-12-06 19:26:44.271006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.271074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.271316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.271382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.271574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.373 [2024-12-06 19:26:44.271628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.373 qpair failed and we were unable to recover it. 00:27:59.373 [2024-12-06 19:26:44.271850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.271917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.272165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.272233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 
00:27:59.374 [2024-12-06 19:26:44.272405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.272454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.272655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.272703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.272926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.273009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.273286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.273335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.273540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.273588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 
00:27:59.374 [2024-12-06 19:26:44.273758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.273807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.273968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.274063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.274259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.274334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.274512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.274571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.274748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.274801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 
00:27:59.374 [2024-12-06 19:26:44.274983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.275031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.275201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.275251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.275458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.275508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.275712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.275774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 00:27:59.374 [2024-12-06 19:26:44.275949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.374 [2024-12-06 19:26:44.276004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.374 qpair failed and we were unable to recover it. 
00:27:59.378 [2024-12-06 19:26:44.307807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.307858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.308055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.308104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.308281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.308332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.308563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.308620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.308822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.308896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 
00:27:59.378 [2024-12-06 19:26:44.309100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.309166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.309300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.309353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.309610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.309923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.309991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.310337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.310408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 
00:27:59.378 [2024-12-06 19:26:44.310625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.310675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.378 [2024-12-06 19:26:44.310916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.378 [2024-12-06 19:26:44.310985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.378 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.311281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.311359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.311656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.311705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.312080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.312167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.312475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.312542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.312750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.312811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.313037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.313109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.313314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.313362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.313566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.313617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.313806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.313877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.314057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.314123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.314342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.314408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.314593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.314650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.314819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.314899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.315104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.315153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.315284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.315332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.315553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.315615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.315855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.315921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.316115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.316183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.316356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.316405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.316592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.316641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.316834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.316910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.317097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.317379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.317429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.317600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.317648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.317835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.317884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.318100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.318157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.318379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.318436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.318577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.318625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 
00:27:59.379 [2024-12-06 19:26:44.318852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.318912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.319106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.319155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.379 [2024-12-06 19:26:44.319371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.379 [2024-12-06 19:26:44.319430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.379 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.319600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.319658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.319797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.319846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.320078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.320160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.320342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.320407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.320551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.320599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.320793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.320868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.321021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.321069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.321250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.321310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.321615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.321665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.321858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.321909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.322129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.322188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.322338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.322386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.322534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.322583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.322946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.322997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.323177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.323244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.323463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.323513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.323840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.323910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.324084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.324160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.324380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.324428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.324626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.324675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.324924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.324999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.325198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.325247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.325534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.325583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.325806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.325869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.326056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.326116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.326257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.326305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.380 [2024-12-06 19:26:44.326599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.326649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 
00:27:59.380 [2024-12-06 19:26:44.326896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.380 [2024-12-06 19:26:44.326946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.380 qpair failed and we were unable to recover it. 00:27:59.381 [2024-12-06 19:26:44.327134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.381 [2024-12-06 19:26:44.327209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.381 qpair failed and we were unable to recover it. 00:27:59.381 [2024-12-06 19:26:44.327359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.381 [2024-12-06 19:26:44.327408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.381 qpair failed and we were unable to recover it. 00:27:59.381 [2024-12-06 19:26:44.327580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.381 [2024-12-06 19:26:44.327641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.381 qpair failed and we were unable to recover it. 00:27:59.381 [2024-12-06 19:26:44.327825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.381 [2024-12-06 19:26:44.327883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-06 19:26:44.328058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.381 [2024-12-06 19:26:44.328116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.381 qpair failed and we were unable to recover it.
00:27:59.385 [... identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triple repeated continuously from 19:26:44.328310 through 19:26:44.361137 ...]
00:27:59.385 [2024-12-06 19:26:44.361495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.361563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.361922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.361993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.362350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.362424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.362634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.362699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.362933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.363007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 
00:27:59.385 [2024-12-06 19:26:44.363303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.363376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.363681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.363739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.364103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.364179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.364373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.364441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.364664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.364712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 
00:27:59.385 [2024-12-06 19:26:44.364962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.365037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.365195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.365277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.365485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.365552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.365736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.365787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.366002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.366069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 
00:27:59.385 [2024-12-06 19:26:44.366233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.366303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.366524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.366572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.366750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.366803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.385 qpair failed and we were unable to recover it. 00:27:59.385 [2024-12-06 19:26:44.366966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.385 [2024-12-06 19:26:44.367044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.367295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.367345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.367546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.367599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.367834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.367905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.368122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.368198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.368424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.368491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.368711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.368792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.368982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.369067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.369281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.369353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.369574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.369623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.369858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.369931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.370227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.370302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.370648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.370698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.370937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.371007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.371234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.371301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.371503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.371596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.371811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.371880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.372092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.372166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.372401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.372468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.372730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.372780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.373039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.373116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.373341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.373409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.373628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.373687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.373869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.373947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.374252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.374333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.374651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.374702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.374932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.375001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 
00:27:59.386 [2024-12-06 19:26:44.375179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.375258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.375508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.375586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.386 qpair failed and we were unable to recover it. 00:27:59.386 [2024-12-06 19:26:44.375827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.386 [2024-12-06 19:26:44.375894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.376155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.376222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.376438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.376521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 
00:27:59.387 [2024-12-06 19:26:44.376776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.376834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.377022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.377100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.377259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.377336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.377539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.377588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.377817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.377889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 
00:27:59.387 [2024-12-06 19:26:44.378061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.378136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.378340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.378388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.378528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.378577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.378861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.378922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 00:27:59.387 [2024-12-06 19:26:44.379148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.387 [2024-12-06 19:26:44.379218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.387 qpair failed and we were unable to recover it. 
00:27:59.387 [2024-12-06 19:26:44.379416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.379467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.379664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.379716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.379917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.380001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.380260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.380328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.380496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.380548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 
00:27:59.662 [2024-12-06 19:26:44.380700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.380772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.381004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.381058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.381323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.381374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.381548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.381610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.381865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.381936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 
00:27:59.662 [2024-12-06 19:26:44.382161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.382232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.382538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.382600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.382769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.382844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.383033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.383118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.383351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.383405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 
00:27:59.662 [2024-12-06 19:26:44.383586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.383645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.383852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.383920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.384140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.384214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.384384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.384434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 00:27:59.662 [2024-12-06 19:26:44.384635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.662 [2024-12-06 19:26:44.384688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.662 qpair failed and we were unable to recover it. 
00:27:59.665 [2024-12-06 19:26:44.416056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.416128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.416344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.416395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.416597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.416649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.416831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.416905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.417158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.417228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 
00:27:59.665 [2024-12-06 19:26:44.417381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.417427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.417599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.417648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.665 qpair failed and we were unable to recover it. 00:27:59.665 [2024-12-06 19:26:44.417838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.665 [2024-12-06 19:26:44.417891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.418046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.418112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.418318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.418367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.418539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.418587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.418764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.418827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.419015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.419064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.419266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.419324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.419591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.419639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.419879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.419957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.420189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.420254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.420449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.420499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.420714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.420781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.420998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.421064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.421335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.421409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.421670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.421718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.422124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.422193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.422483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.422550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.422835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.422903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.423141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.423210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.423415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.423489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.423704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.423764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.423947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.424013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.424200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.424249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.424431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.424492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.424751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.424802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.425001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.425049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.425267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.425315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.425560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.425608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 
00:27:59.666 [2024-12-06 19:26:44.425830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.425881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.426050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.426104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.426327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.426376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.426618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.666 [2024-12-06 19:26:44.426666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.666 qpair failed and we were unable to recover it. 00:27:59.666 [2024-12-06 19:26:44.427111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.427184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.427468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.427535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.427737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.427788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.428012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.428077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.428377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.428444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.428719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.428801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.429155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.429225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.429510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.429577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.429802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.429872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.430099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.430169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.430447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.430513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.430658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.430718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.430944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.431017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.431267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.431315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.431519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.431569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.431753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.431804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.432089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.432156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.432384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.432451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.432693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.432774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.433050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.433116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.433409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.433476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.433700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.433776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.434004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.434070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.434256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.434322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.434515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.434580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.434858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.434935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.435204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.435271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.435445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.435510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.435707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.435767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.436029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.436082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.436357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.436407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.436565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.436621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.436846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.436914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.437212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.437287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.437585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.437633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 00:27:59.667 [2024-12-06 19:26:44.437819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.437895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
00:27:59.667 [2024-12-06 19:26:44.438206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.667 [2024-12-06 19:26:44.438293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.667 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure triplet repeated for every retry from 19:26:44.438 through 19:26:44.475 (tqpair=0x7f5938000b90); all connection attempts to 10.0.0.2 port 4420 failed with errno = 111 and no qpair could be recovered ...]
00:27:59.670 [2024-12-06 19:26:44.475499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.475556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.475796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.475867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.476098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.476164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.476425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.476473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.476663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.476712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.477003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.477072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.477354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.477418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.477645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.477694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.477961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.478028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.478231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.478298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.478540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.478606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.478849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.478918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.479188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.479255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.479548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.479612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.479865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.479933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.480163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.480230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.480504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.480569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.480828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.480896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.481185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.481252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.481520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.481585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.481813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.481879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.482160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.482225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.482495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.482561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.482843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.482910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.483085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.483153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.483449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.483515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.483793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.483841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.484138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.484206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.484478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.484544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.484831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.484899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 
00:27:59.671 [2024-12-06 19:26:44.485163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.485229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.485456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.485522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.485777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.485826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.486056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.486121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.671 qpair failed and we were unable to recover it. 00:27:59.671 [2024-12-06 19:26:44.486336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.671 [2024-12-06 19:26:44.486401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.486676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.486737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.487017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.487084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.487382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.487446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.487680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.487741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.487966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.488015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.488281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.488354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.488640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.488688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.488952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.489001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.489220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.489286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.489552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.489617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.489849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.489917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.490198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.490262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.490468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.490535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.490758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.490828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.491024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.491088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.491311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.491376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.491652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.491701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.491934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.492000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.492301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.492366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.492654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.492702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.492957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.493024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.493250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.493316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.493542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.493589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.493865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.493932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.494217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.494282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.494572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.494639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.494935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.495002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.495219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.495286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.495562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.495627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.495901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.495970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.496205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.496271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.496561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.496627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.496863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.496930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.497210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.497275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.497520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.497586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 
00:27:59.672 [2024-12-06 19:26:44.497869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.497937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.498207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.498271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.672 [2024-12-06 19:26:44.498500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.672 [2024-12-06 19:26:44.498564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.672 qpair failed and we were unable to recover it. 00:27:59.673 [2024-12-06 19:26:44.498856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.673 [2024-12-06 19:26:44.498922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.673 qpair failed and we were unable to recover it. 00:27:59.673 [2024-12-06 19:26:44.499152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.673 [2024-12-06 19:26:44.499218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.673 qpair failed and we were unable to recover it. 
00:27:59.673 [2024-12-06 19:26:44.499501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.673 [2024-12-06 19:26:44.499566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.673 qpair failed and we were unable to recover it. 
00:27:59.673 [... the preceding connect() failed (errno = 111) / qpair recovery failure pair repeated 114 more times for tqpair=0x7f5938000b90 (addr=10.0.0.2, port=4420), timestamps 19:26:44.499835 through 19:26:44.536469 ...]
00:27:59.676 [2024-12-06 19:26:44.536697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.536769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.537049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.537097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.537365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.537431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.537634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.537682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.537918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.537967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.538198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.538264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.538540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.538605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.538886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.538954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.539228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.539293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.539581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.539646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.539950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.540025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.540325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.540390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.540668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.540715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.540930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.540995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.541229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.541294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.541579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.541643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.541927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.541993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.542281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.542346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.542612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.542660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.542959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.543026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.543259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.543325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.543610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.543677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.544066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.544361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.544428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.544748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.544798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.545040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.545113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.545396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.545464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.545745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.545794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.546072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.546150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.676 [2024-12-06 19:26:44.546442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.546510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 
00:27:59.676 [2024-12-06 19:26:44.546796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.676 [2024-12-06 19:26:44.546869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.676 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.547168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.547237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.547512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.547580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.547848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.547897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.548185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.548253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.548564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.548632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.548893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.548941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.549174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.549256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.549562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.549614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.549935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.550004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.550292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.550360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.550596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.550647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.550900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.550969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.551209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.551282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.551564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.551630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.551915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.551982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.552277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.552343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.552626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.552676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.552942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.553015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.553281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.553356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.553632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.553680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.553991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.554061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.554350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.554416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.554698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.554758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.555047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.555112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.555423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.555491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.555794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.555869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.556146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.556214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.556564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.556808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.556859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.557140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.557210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.557483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.557550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.557834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.557889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.558195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.558261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.558568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.558636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 
00:27:59.677 [2024-12-06 19:26:44.558931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.558999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.559301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.559370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.559648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.677 [2024-12-06 19:26:44.559697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.677 qpair failed and we were unable to recover it. 00:27:59.677 [2024-12-06 19:26:44.560009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.560088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.560329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.560401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 
00:27:59.678 [2024-12-06 19:26:44.560666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.560748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.561106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.561385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.561462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.561752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.561803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.562094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.562161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 
00:27:59.678 [2024-12-06 19:26:44.562450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.562527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.562813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.562864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.563106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.563180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.563453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.563524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 00:27:59.678 [2024-12-06 19:26:44.563778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.678 [2024-12-06 19:26:44.563829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.678 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.600888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.600957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.601233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.601300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.601575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.601640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.601921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.601988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.602265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.602331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.602600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.602649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.602941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.603009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.603301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.603376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.603650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.603699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.604005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.604075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.604354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.604420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.604693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.604768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.605046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.605112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.605408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.605473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.605748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.605798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.606070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.606137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.606333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.606399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.606669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.606718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.607015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.607063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.607270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.607337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.607574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.607641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.607888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.607937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.608212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.608279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.608550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.608618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.681 [2024-12-06 19:26:44.608893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.608942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 
00:27:59.681 [2024-12-06 19:26:44.609228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.681 [2024-12-06 19:26:44.609295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.681 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.609581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.609646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.609943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.610009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.610276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.610342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.610622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.610669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.610920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.610987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.611253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.611320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.611590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.611657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.611897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.611964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.612250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.612317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.612601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.612666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.612973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.613041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.613325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.613391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.613656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.613703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.613954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.614021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.614256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.614322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.614617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.614682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.614980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.615046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.615282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.615346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.615611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.615678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.615955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.616021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.616258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.616322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.616596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.616675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.616965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.617032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.617274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.617341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.617557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.617605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.617865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.617931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.618154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.618220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.618509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.618574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.618857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.618923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.619156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.619221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.619499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.619565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.619849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.619917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.620195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.620260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.620526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.620592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 
00:27:59.682 [2024-12-06 19:26:44.620880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.620947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.621237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.621303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.621575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.621624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.621829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.682 [2024-12-06 19:26:44.621897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.682 qpair failed and we were unable to recover it. 00:27:59.682 [2024-12-06 19:26:44.622123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.622187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 
00:27:59.683 [2024-12-06 19:26:44.622479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.622544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.622822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.622889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.623160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.623226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.623498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.623564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.623839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.623888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 
00:27:59.683 [2024-12-06 19:26:44.624083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.624148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.624424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.624473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.624701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.624760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.625048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.625114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 00:27:59.683 [2024-12-06 19:26:44.625401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.683 [2024-12-06 19:26:44.625470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.683 qpair failed and we were unable to recover it. 
00:27:59.683 [2024-12-06 19:26:44.625747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.683 [2024-12-06 19:26:44.625797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.683 qpair failed and we were unable to recover it.
00:27:59.686 [2024-12-06 19:26:44.663210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.663275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.663556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.663622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.663897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.663963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.664194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.664269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.664542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.664609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 
00:27:59.686 [2024-12-06 19:26:44.664875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.664940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.665135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.665201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.665433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.665498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.665694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.665892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.665961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 
00:27:59.686 [2024-12-06 19:26:44.666188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.666254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.666472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.666521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.666703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.666767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.666979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.667026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.667238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.667286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 
00:27:59.686 [2024-12-06 19:26:44.667480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.667528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.667745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.667794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.667982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.668055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.668294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.668360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.668576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.668624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 
00:27:59.686 [2024-12-06 19:26:44.668860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.668929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.669159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.686 [2024-12-06 19:26:44.669225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.686 qpair failed and we were unable to recover it. 00:27:59.686 [2024-12-06 19:26:44.669421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.669488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.669672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.669731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.669908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.669973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.670162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.670230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.670439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.670488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.670669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.670718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.670886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.670934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.671165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.671213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.671440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.671488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.671687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.671748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.671902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.671951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.672104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.672152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.672392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.672440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.672714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.672790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.672948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.673019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.673268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.673494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.673560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.673806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.673874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.674114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.674180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.674362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.674429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.674618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.674666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.674878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.674927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.675166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.675214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.675419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.675485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.675638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.675687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.675885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.675934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.676112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.676161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.676341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.676390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.676570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.676617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.676798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.676847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.677027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.677074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.677298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.677347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.677569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.677618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.677829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.677878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.678100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.678148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.678365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.678444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.678693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 00:27:59.687 [2024-12-06 19:26:44.678950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.687 [2024-12-06 19:26:44.679017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.687 qpair failed and we were unable to recover it. 
00:27:59.687 [2024-12-06 19:26:44.679261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.679327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.679506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.679554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.679743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.679793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.680003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.680073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.680265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.680342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 
00:27:59.688 [2024-12-06 19:26:44.680522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.680570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.680803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.680852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.681059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.681108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.681386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.681435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.681617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.681664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 
00:27:59.688 [2024-12-06 19:26:44.681896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.681945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.682236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.682284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.682521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.682568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.682817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.682884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 00:27:59.688 [2024-12-06 19:26:44.683082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.688 [2024-12-06 19:26:44.683149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.688 qpair failed and we were unable to recover it. 
00:27:59.688 [2024-12-06 19:26:44.683455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.688 [2024-12-06 19:26:44.683519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.688 qpair failed and we were unable to recover it.
[… the same three-line sequence (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously from 19:26:44.683749 through 19:26:44.718745 …]
00:27:59.965 [2024-12-06 19:26:44.719020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.965 [2024-12-06 19:26:44.719068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.965 qpair failed and we were unable to recover it.
00:27:59.965 [2024-12-06 19:26:44.719347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.719395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.719596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.719645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.719902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.719951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.720254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.720320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.720555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.720621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 
00:27:59.965 [2024-12-06 19:26:44.720906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.720975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.721257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.721323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.721576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.721642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.721874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.721942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.722191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.722257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 
00:27:59.965 [2024-12-06 19:26:44.722545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.722612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.722816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.722885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.723114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.723181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.723408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.723473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.723701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.723769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 
00:27:59.965 [2024-12-06 19:26:44.724102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.724167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.724438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.724504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.724777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.724826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.725103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.725168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.725436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.725502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 
00:27:59.965 [2024-12-06 19:26:44.725689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.725749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.725978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.726027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.726224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.726289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.726519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.726585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.965 [2024-12-06 19:26:44.726843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.726911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 
00:27:59.965 [2024-12-06 19:26:44.727180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.965 [2024-12-06 19:26:44.727247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.965 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.727546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.727613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.727904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.727972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.728262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.728327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.728580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.728628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 
00:27:59.966 [2024-12-06 19:26:44.728903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.728970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.729237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.729304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.729569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.729635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.729887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.729955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.730178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.730244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 
00:27:59.966 [2024-12-06 19:26:44.730517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.730583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.730815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.730883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.731188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.731255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.731523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.731590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.731875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.731942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 
00:27:59.966 [2024-12-06 19:26:44.732222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.732288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.732549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.732808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.732879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.733161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.733228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.733496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.733562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 
00:27:59.966 [2024-12-06 19:26:44.733802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.733871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.734128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.734192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.734471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.734536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.734782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.734849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.735105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.735170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 
00:27:59.966 [2024-12-06 19:26:44.735442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.735507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.735744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.735793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.736026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.736096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.966 qpair failed and we were unable to recover it. 00:27:59.966 [2024-12-06 19:26:44.736299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.966 [2024-12-06 19:26:44.736365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.736610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.736665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 
00:27:59.967 [2024-12-06 19:26:44.736941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.737010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.737305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.737371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.737618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.737666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.737947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.738015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.738288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.738354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 
00:27:59.967 [2024-12-06 19:26:44.738628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.738676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.738964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.739032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.739251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.739322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.739592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.739659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.739964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.740039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 
00:27:59.967 [2024-12-06 19:26:44.740367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.740416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.740576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.740624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.740874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.740942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.741181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.741247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.741483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.741550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 
00:27:59.967 [2024-12-06 19:26:44.741828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.741896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.742160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.742208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.742486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.742535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.742745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.742794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 00:27:59.967 [2024-12-06 19:26:44.743046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.967 [2024-12-06 19:26:44.743113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.967 qpair failed and we were unable to recover it. 
00:27:59.967 [2024-12-06 19:26:44.743386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.967 [2024-12-06 19:26:44.743451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.967 qpair failed and we were unable to recover it.
00:27:59.971 [... the three-line failure sequence above (connect() refused with errno 111 / ECONNREFUSED, sock connection error on tqpair=0x7f5938000b90 to 10.0.0.2 port 4420, qpair unrecoverable) repeats for every subsequent retry, timestamps 19:26:44.743699 through 19:26:44.780717; repeated entries condensed ...]
00:27:59.971 [2024-12-06 19:26:44.781025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.781093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.781322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.781386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.781661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.781709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.782021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.782088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.782392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.782457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 
00:27:59.971 [2024-12-06 19:26:44.782741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.782789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.783078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.783144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.783346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.783411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.783687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.783747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.784021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.784069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 
00:27:59.971 [2024-12-06 19:26:44.784337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.971 [2024-12-06 19:26:44.784403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.971 qpair failed and we were unable to recover it. 00:27:59.971 [2024-12-06 19:26:44.784699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.784774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.784963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.785011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.785253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.785319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.785577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.785642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.785901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.785950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.786217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.786282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.786549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.786613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.786866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.786916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.787211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.787276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.787562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.787628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.787901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.787969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.788190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.788256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.788555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.788622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.788838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.788912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.789193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.789258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.789505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.789571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.789860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.789928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.790168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.790234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.790467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.790534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.790786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.790836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.791124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.791189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.791466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.791531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.791783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.791851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.792082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.792146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.792378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.792445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.792766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.792814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.793084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.793148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.793450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.793516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.793810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.793885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.794179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.794247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.794483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.794550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.794825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.794901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.795221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.795293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.795553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.795602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 
00:27:59.972 [2024-12-06 19:26:44.795831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.795897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.796130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.796201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.796516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.972 [2024-12-06 19:26:44.796591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.972 qpair failed and we were unable to recover it. 00:27:59.972 [2024-12-06 19:26:44.796876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.796943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.797235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.797302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.797591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.797642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.797934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.798001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.798338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.798598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.798655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.798951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.799018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.799312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.799379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.799642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.799691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.800058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.800347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.800414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.800665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.800714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.800900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.800969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.801262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.801327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.801617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.801682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.801968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.802036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.802291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.802375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.802649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.802696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.803015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.803085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.803388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.803699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.803761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.804035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.804102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.804396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.804475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.804787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.804858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.805159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.805225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.805503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.805580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 00:27:59.973 [2024-12-06 19:26:44.805843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.973 [2024-12-06 19:26:44.805894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.973 qpair failed and we were unable to recover it. 
00:27:59.973 [2024-12-06 19:26:44.806207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.806273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.806550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.806616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.806893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.806948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.807252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.807318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.807587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.807636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.807935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.808004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.973 qpair failed and we were unable to recover it.
00:27:59.973 [2024-12-06 19:26:44.808294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.973 [2024-12-06 19:26:44.808367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.808637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.808686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.808939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.809005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.809285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.809352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.809568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.809618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.809878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.809946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.810232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.810309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.810602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.810670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.810967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.811040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.811321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.811391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.811656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.811707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.811950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.812019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.812318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.812385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.812654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.812704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.812999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.813066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.813371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.813438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.813674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.813745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.814003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.814074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.814329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.814397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.814664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.814712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.814956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.815024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.815253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.815319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.815619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.815687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.815992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.816067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.816362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.816429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.816705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.816769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.817021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.817069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.817346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.817414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.817692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.817756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.818041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.818089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.818390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.818458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.818708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.818773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.819050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.819097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.819340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.819416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.819697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.819761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.820045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.820093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.820377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.820452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.820748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.974 [2024-12-06 19:26:44.820799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.974 qpair failed and we were unable to recover it.
00:27:59.974 [2024-12-06 19:26:44.821054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.821102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.821340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.821408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.821645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.821693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.821982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.822030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.822314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.822392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.822637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.822686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.822968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.823018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.823299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.823365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.823623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.823672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.823972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.824042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.824303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.824351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.824622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.824691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.825017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.825086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.825368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.825436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.825778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.826025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.826099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.826352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.826417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.826621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.826675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.826963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.827012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.827256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.827323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.827566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.827637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.827934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.828003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.828234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.828316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.828607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.828674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.828978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.829043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.829333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.829413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.829690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.829755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.830046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.830113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.830423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.830497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.830741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.830792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.831060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.831127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.831407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.831485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.831799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.831872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.832161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.832227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.832486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.832555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.832827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.832877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.833159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.833226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.833523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.975 [2024-12-06 19:26:44.833595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.975 qpair failed and we were unable to recover it.
00:27:59.975 [2024-12-06 19:26:44.833865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.833923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.834206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.834254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.834534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.834601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.834883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.834951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.835149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.835218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.835448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.835524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.835733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.835783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.835945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.836016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.836291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.836360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.836615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.836664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.836891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.836958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.837145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.837211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.837424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.837498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.837678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.837741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.837971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.838021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.838288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.838336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.838545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.838603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.838809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.838884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.839201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.839267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.839552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.839609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.839802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.839873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.840109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.840180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.840408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.840475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.840675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.840744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.840987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.841055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.841281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.841348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.841563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.841611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.841813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.841890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.842074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.842149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.842373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.842439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.842629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.842678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.842875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.842943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.843165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.976 [2024-12-06 19:26:44.843240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:27:59.976 qpair failed and we were unable to recover it.
00:27:59.976 [2024-12-06 19:26:44.843422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.976 [2024-12-06 19:26:44.843471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.976 qpair failed and we were unable to recover it. 00:27:59.976 [2024-12-06 19:26:44.843658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.976 [2024-12-06 19:26:44.843707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.976 qpair failed and we were unable to recover it. 00:27:59.976 [2024-12-06 19:26:44.843882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.976 [2024-12-06 19:26:44.843931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.976 qpair failed and we were unable to recover it. 00:27:59.976 [2024-12-06 19:26:44.844137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.976 [2024-12-06 19:26:44.844187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.976 qpair failed and we were unable to recover it. 00:27:59.976 [2024-12-06 19:26:44.844402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.976 [2024-12-06 19:26:44.844451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.976 qpair failed and we were unable to recover it. 
00:27:59.976 [2024-12-06 19:26:44.844634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.844683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.844864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.844914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.845123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.845171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.845393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.845444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.845626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.845674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.845903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.845969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.846157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.846223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.846444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.846495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.846684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.846752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.846946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.846995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.847178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.847227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.847455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.847504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.847691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.847769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.847991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.848039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.848250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.848307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.848542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.848592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.848824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.848892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.849120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.849185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.849468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.849537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.849825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.849893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.850112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.850179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.850382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.850450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.850679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.850741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.850915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.850992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.851204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.851253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.851454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.851503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.851659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.851708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.851883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.851932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.852119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.852169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.852383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.852439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.852713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.852898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.852948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.853110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.853159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.853334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.853382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.853564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.853616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 
00:27:59.977 [2024-12-06 19:26:44.853818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.853868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.854083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.854132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.854271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.977 [2024-12-06 19:26:44.854321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.977 qpair failed and we were unable to recover it. 00:27:59.977 [2024-12-06 19:26:44.854520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.854581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.854798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.854847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.855033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.855082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.855290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.855339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.855542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.855602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.855822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.855891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.856116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.856184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.856361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.856409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.856552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.856611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.856812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.856863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.857067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.857135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.857351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.857399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.857603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.857664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.857905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.857979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.858179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.858247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.858451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.858500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.858670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.858747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.858915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.858988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.859170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.859241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.859465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.859514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.859754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.859816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.860040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.860107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.860301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.860369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.860560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.860607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.860809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.860895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.861094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.861161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.861334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.861383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.861572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.861620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.861821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.861892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.862111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.862181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.862351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.862399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.862603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.862671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 
00:27:59.978 [2024-12-06 19:26:44.862912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.862964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.863139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.978 [2024-12-06 19:26:44.863191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.978 qpair failed and we were unable to recover it. 00:27:59.978 [2024-12-06 19:26:44.863396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.863444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.863633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.863684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.863970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ea570 is same with the state(6) to be set 00:27:59.979 [2024-12-06 19:26:44.864370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.864467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 
00:27:59.979 [... the same connect() failure (errno = 111) and qpair connection error for tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 repeated for each subsequent reconnect attempt, 2024-12-06 19:26:44.864753 through 19:26:44.868956 ...]
00:27:59.979 [2024-12-06 19:26:44.869170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.869218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.869471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.869541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.869822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.869873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.870053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.870102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.870282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.870346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 
00:27:59.979 [2024-12-06 19:26:44.870602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.870666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.870921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.870971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.871193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.871258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.871477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.871541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.871804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.871854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 
00:27:59.979 [2024-12-06 19:26:44.872051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.872124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.872360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.872429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.872684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.872780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.872965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.873013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.873219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.873265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 
00:27:59.979 [2024-12-06 19:26:44.873483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.873549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.873785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.873835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.873992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.874047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.874242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.874295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 00:27:59.979 [2024-12-06 19:26:44.874539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.979 [2024-12-06 19:26:44.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.979 qpair failed and we were unable to recover it. 
00:27:59.979 [2024-12-06 19:26:44.874787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.874835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.875032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.875096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.875343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.875423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.875591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.875619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.875801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.875851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.876041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.876089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.876265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.876312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.876521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.876587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.876791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.876845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.877036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.877083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.877272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.877336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.877544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.877615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.877831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.877884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.878119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.878167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.878393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.878457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.878668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.878750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.878980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.879051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.879303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.879367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.879597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.879661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.879918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.879967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.880192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.880258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.880538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.880586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.880832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.880909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.881125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.881195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.881453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.881517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.881752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.881818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.882018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.882081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.882270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.882333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.882592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.882657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.882896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.882963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.883216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.883281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.883478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.883542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.883790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.883858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.884115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.884180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.884434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.884497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 
00:27:59.980 [2024-12-06 19:26:44.884715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.884794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.884985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.885049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.885287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.885352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.980 [2024-12-06 19:26:44.885601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.980 [2024-12-06 19:26:44.885665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.980 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.885867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.885932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.886158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.886233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.886453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.886522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.886773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.886839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.887112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.887177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.887395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.887457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.887663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.887744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.887970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.888041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.888256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.888320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.888581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.888644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.888904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.888971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.891854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.891954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.892214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.892284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.892513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.892580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.892801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.892867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.893137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.893204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.893438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.893508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.893740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.893806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.894002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.894065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.894349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.894419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.894690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.894773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.894968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.895032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.895283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.895348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.895607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.895677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.895930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.895995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.896229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.896293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.896525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.896588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.896806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.896873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.897132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.897208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.897457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.897521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.897771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.897837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.898096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.898160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.898388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.898458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.898744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.898810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.899048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.899112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.899327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.899390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 
00:27:59.981 [2024-12-06 19:26:44.899651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.899755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.899969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.900033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.981 qpair failed and we were unable to recover it. 00:27:59.981 [2024-12-06 19:26:44.900287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.981 [2024-12-06 19:26:44.900351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.900590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.900653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.900865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.900931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.901154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.901225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.901487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.901551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.901785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.901850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.902062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.902125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.902384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.902447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.902678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.902757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.902951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.903014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.903232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.903555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.903619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.903839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.903909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.904149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.904212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.904459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.904771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.904842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.905052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.905115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.905374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.905439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.905637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.905701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.905936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.906000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.906315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.906384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.906695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.906778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.906974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.907037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.907258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.907322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.907571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.907637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.907849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.907916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.908139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.908203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.908444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.908507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.908757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.908822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.909063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.909128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.909383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.909447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.909715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.909804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.909997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.910061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 
00:27:59.982 [2024-12-06 19:26:44.910305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.910368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.910588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.982 [2024-12-06 19:26:44.910653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.982 qpair failed and we were unable to recover it. 00:27:59.982 [2024-12-06 19:26:44.910863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.910928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.911142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.911204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.911451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.911521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.911754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.911821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.912084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.912146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.912384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.912448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.912643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.912958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.913024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.913247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.913311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.913523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.913587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.913790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.913857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.914094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.914157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.914386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.914456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.914660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.914740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.914954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.915017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.915274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.915338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.915557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.915624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.915864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.915929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.916158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.916223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.916437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.916501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.916693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.916991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.917062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.917312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.917382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.917612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.917686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.917938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.918007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.918231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.918294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.918535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.918599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.918808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.918874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.919121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.919184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.919441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.919504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.919772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.919838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.920024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.920087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.920338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.920401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.920622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.920686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.920892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.920956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.921195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.921258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.921518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.921583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 00:27:59.983 [2024-12-06 19:26:44.921808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.983 [2024-12-06 19:26:44.921873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.983 qpair failed and we were unable to recover it. 
00:27:59.983 [2024-12-06 19:26:44.922080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.984 [2024-12-06 19:26:44.922144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.984 qpair failed and we were unable to recover it. 00:27:59.984 [2024-12-06 19:26:44.922388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.984 [2024-12-06 19:26:44.922451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.984 qpair failed and we were unable to recover it. 00:27:59.984 [2024-12-06 19:26:44.922660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.984 [2024-12-06 19:26:44.922736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.984 qpair failed and we were unable to recover it. 00:27:59.984 [2024-12-06 19:26:44.922928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.984 [2024-12-06 19:26:44.922990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.984 qpair failed and we were unable to recover it. 00:27:59.984 [2024-12-06 19:26:44.923234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.984 [2024-12-06 19:26:44.923297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.984 qpair failed and we were unable to recover it. 
00:27:59.984-00:27:59.986 [2024-12-06 19:26:44.923548 through 19:26:44.955206] (the identical posix.c:1054:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420" pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously with only timestamps advancing)
00:27:59.986 [2024-12-06 19:26:44.955432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.955495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.955705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.955792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.955968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.956033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.956307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.956371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.956589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.956653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.956866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.956930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.957208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.957272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.957485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.957548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.957783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.957848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.958080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.958143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.958353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.958416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.958630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.958694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.958894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.958957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.959200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.959264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.959501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.959565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.959803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.959867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.960112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.960175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.960420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.960483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.960704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.960787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.960961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.961034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.961278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.961342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.961563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.961626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.961975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.962040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.962277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.962341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.962584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.962647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.962855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.962920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.963141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.963204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.963463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.963538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.963787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.963851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.964108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.964171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 
00:27:59.987 [2024-12-06 19:26:44.964415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.964478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.964663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.964758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.964942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.965005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.965277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.987 [2024-12-06 19:26:44.965342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.987 qpair failed and we were unable to recover it. 00:27:59.987 [2024-12-06 19:26:44.965592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.965654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.965857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.965921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.966166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.966230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.966476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.966539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.966758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.966823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.967048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.967113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.967324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.967386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.967562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.967625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.967862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.967927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.968150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.968212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.968423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.968486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.968778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.968844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.969076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.969139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.969340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.969404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.969648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.969712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.969939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.970002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.970215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.970279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.970572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.970636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.970877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.970940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.971184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.971247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.971470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.971534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.971750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.971814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.972060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.972123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.972378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.972443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.972682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.972760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.972988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.973052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.973324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.973390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.973606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.973669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.973902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.973966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.974224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.974287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.974509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.974572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.974782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.974847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.975102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.975167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.975415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.975479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.975692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.975769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.976009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.976072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 
00:27:59.988 [2024-12-06 19:26:44.976319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.976382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.976623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.976687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.988 [2024-12-06 19:26:44.976920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.988 [2024-12-06 19:26:44.976983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.988 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.977169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.977232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.977493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.977556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 
00:27:59.989 [2024-12-06 19:26:44.977751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.977823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.978040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.978104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.978342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.978405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.978641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.978704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 00:27:59.989 [2024-12-06 19:26:44.978942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.989 [2024-12-06 19:26:44.979006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:27:59.989 qpair failed and we were unable to recover it. 
00:27:59.989 [... identical error triplet (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from [2024-12-06 19:26:44.979255] through [2024-12-06 19:26:45.014651], elapsed log time 00:27:59.989 to 00:28:00.264 ...]
00:28:00.264 [2024-12-06 19:26:45.014941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.264 [2024-12-06 19:26:45.015006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.264 qpair failed and we were unable to recover it. 00:28:00.264 [2024-12-06 19:26:45.015258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.264 [2024-12-06 19:26:45.015321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.264 qpair failed and we were unable to recover it. 00:28:00.264 [2024-12-06 19:26:45.015565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.264 [2024-12-06 19:26:45.015628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.264 qpair failed and we were unable to recover it. 00:28:00.264 [2024-12-06 19:26:45.015916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.264 [2024-12-06 19:26:45.015980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.264 qpair failed and we were unable to recover it. 00:28:00.265 [2024-12-06 19:26:45.016270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.265 [2024-12-06 19:26:45.016333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.265 qpair failed and we were unable to recover it. 
00:28:00.265 [2024-12-06 19:26:45.016621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.265 [2024-12-06 19:26:45.016685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.265 qpair failed and we were unable to recover it. 00:28:00.265 [2024-12-06 19:26:45.016973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.265 [2024-12-06 19:26:45.017036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.265 qpair failed and we were unable to recover it. 00:28:00.265 [2024-12-06 19:26:45.017307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.265 [2024-12-06 19:26:45.017370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.017622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.017686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.017912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.017974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.018249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.018312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.018603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.018668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.018950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.019015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.019276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.019340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.019640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.019704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.020016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.020079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.020327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.020391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.020654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.020718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.021002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.021065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.021296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.021370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.021660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.021743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.022015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.022077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.022339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.022402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.022706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.022786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.023069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.023132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.023395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.023459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.023745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.023810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.024096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.024159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.024413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.024476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.024770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.024836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.025145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.025209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.025470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.025533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.025823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.025888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.026215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.026279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.026530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.026593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 
00:28:00.277 [2024-12-06 19:26:45.026841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.026906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.027205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.027269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.027576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.027639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.277 qpair failed and we were unable to recover it. 00:28:00.277 [2024-12-06 19:26:45.027910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.277 [2024-12-06 19:26:45.027974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.028272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.028335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.028594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.028658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.028960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.029025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.029321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.029384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.029599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.029664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.030001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.030065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.030340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.030403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.030713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.030806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.031145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.031497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.031560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.031841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.031906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.032201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.032265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.032579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.032643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.032962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.033028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.033348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.033410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.033708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.033791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.034061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.034124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.034369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.034432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.034687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.034768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.035028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.035091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.035397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.035460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.035787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.035853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.036154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.036216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.036508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.036572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.036895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.036961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.037235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.037298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.037561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.037624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.037942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.038006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.038287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.038350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.038661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.038739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.039056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.039118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.039327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.039390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.039705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.039786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.040048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.040110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.040410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.040474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.040820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.040887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 
00:28:00.278 [2024-12-06 19:26:45.041190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.278 [2024-12-06 19:26:45.041253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.278 qpair failed and we were unable to recover it. 00:28:00.278 [2024-12-06 19:26:45.041514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.279 [2024-12-06 19:26:45.041577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.279 qpair failed and we were unable to recover it. 00:28:00.279 [2024-12-06 19:26:45.041879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.279 [2024-12-06 19:26:45.041945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.279 qpair failed and we were unable to recover it. 00:28:00.279 [2024-12-06 19:26:45.042193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.279 [2024-12-06 19:26:45.042253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.279 qpair failed and we were unable to recover it. 00:28:00.279 [2024-12-06 19:26:45.042489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.279 [2024-12-06 19:26:45.042549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.279 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.083223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.083287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.083602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.083666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.083976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.084041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.084364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.084428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.084752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.084818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.085130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.085194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.085525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.085591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.085891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.085958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.086238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.086301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.086604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.086668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.086996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.087061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.087320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.087384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.087669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.087756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.088081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.088144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.088403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.088467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.088781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.088847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.089164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.089228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.089549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.089613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.089943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.090009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.090328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.090395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.090716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.090812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.091065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.091129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.091404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.091474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.091791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.091862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.092172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.092236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.092548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.092610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.092934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.093006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.093316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.093381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.093680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.093762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.094093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.094164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 
00:28:00.282 [2024-12-06 19:26:45.094466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.094535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.094851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.094917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.095227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.282 [2024-12-06 19:26:45.095291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.282 qpair failed and we were unable to recover it. 00:28:00.282 [2024-12-06 19:26:45.095610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.095674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.095952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.096016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.096279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.096343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.096632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.096695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.097053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.097123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.097383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.097447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.097754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.097820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.098139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.098203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.098485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.098553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.098855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.098920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.099221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.099285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.099578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.099642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.099924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.099988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.100286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.100360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.100669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.100746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.101016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.101080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.101341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.101405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.101710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.101787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.102045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.102109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.102440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.102504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.102817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.102883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.103185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.103248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.103513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.103577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.103892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.103957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.104258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.104321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.104619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.104684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.105030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.105094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.105402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.105465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.105748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.105813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.106156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.106219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.106543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.106607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.106916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.106982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.107172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.107236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.107418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.107482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.107775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.107839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.108159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.108222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.108528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.108592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 
00:28:00.283 [2024-12-06 19:26:45.108869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.108944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.283 [2024-12-06 19:26:45.109251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.283 [2024-12-06 19:26:45.109315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.283 qpair failed and we were unable to recover it. 00:28:00.284 [2024-12-06 19:26:45.109576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.284 [2024-12-06 19:26:45.109639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.284 qpair failed and we were unable to recover it. 00:28:00.284 [2024-12-06 19:26:45.109918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.284 [2024-12-06 19:26:45.109999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.284 qpair failed and we were unable to recover it. 00:28:00.284 [2024-12-06 19:26:45.110319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.284 [2024-12-06 19:26:45.110384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.284 qpair failed and we were unable to recover it. 
00:28:00.284 [2024-12-06 19:26:45.110690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.284 [2024-12-06 19:26:45.110772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:00.284 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats across timestamps 19:26:45.111 through 19:26:45.151 ...]
00:28:00.287 [2024-12-06 19:26:45.152290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.152354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.152661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.152754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.153079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.153141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.153437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.153500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.153819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.153885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.154183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.154255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.154523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.154586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.154894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.154960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.155264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.155326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.155636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.155700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.155991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.156055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.156368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.156430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.156740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.156804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.157125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.157189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.157494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.157557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.157852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.157917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.158219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.158283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.158593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.158656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.158982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.159046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.159372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.159437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.159751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.159815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.160139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.160203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.160523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.160587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.160875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.160940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.161211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.161275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.161582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.161645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.161954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.162019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.162316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.162380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.162674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.162754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.163016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.163079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 
00:28:00.287 [2024-12-06 19:26:45.163290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.287 [2024-12-06 19:26:45.163354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.287 qpair failed and we were unable to recover it. 00:28:00.287 [2024-12-06 19:26:45.163626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.163688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.164013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.164077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.164389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.164453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.164759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.164825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.165044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.165107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.165415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.165477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.165748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.165812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.166059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.166122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.166424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.166486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.166779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.166844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.167108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.167171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.167370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.167433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.167676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.167754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.168012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.168076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.168374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.168436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.168774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.168850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.169120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.169184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.169491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.169554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.169821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.169885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.170187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.170251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.170552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.170614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.170939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.171003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.171297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.171360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.171658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.171736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.172002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.172065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.172361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.172425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.172678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.172776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.173094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.173158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.173448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.173511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.173825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.173890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.174216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.174280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.174580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.174643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.174969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.175033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.175321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.175383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.288 [2024-12-06 19:26:45.175634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.175697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.176022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.176086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.176402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.176466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.176741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.176805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 00:28:00.288 [2024-12-06 19:26:45.177107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.288 [2024-12-06 19:26:45.177169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.288 qpair failed and we were unable to recover it. 
00:28:00.289 [2024-12-06 19:26:45.177473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.177536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.177796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.177860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.178103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.178166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.178551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.178858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 
00:28:00.289 [2024-12-06 19:26:45.179244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.179307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.179623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.179688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.179963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.180329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.180392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 00:28:00.289 [2024-12-06 19:26:45.180693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.289 [2024-12-06 19:26:45.180786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.289 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.220649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.220711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.221049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.221111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.221408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.221471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.221790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.221855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.222149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.222213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.222516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.222589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.222893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.222959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.223268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.223331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.223633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.223696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.223986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.224051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.224349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.224412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.224709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.224799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.225106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.225169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.225473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.225537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.225852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.225917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.226227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.226289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.226591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.226654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.226950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.227015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.227276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.227339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.227645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.227708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.228047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.228111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.228433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.228495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.228805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.228869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.229182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.229245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.229559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.229623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 
00:28:00.292 [2024-12-06 19:26:45.229939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.230003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.230284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.230347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.230661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.230742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.231049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.231112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.292 qpair failed and we were unable to recover it. 00:28:00.292 [2024-12-06 19:26:45.231407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.292 [2024-12-06 19:26:45.231469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.231774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.231840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.232117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.232179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.232430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.232504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.232805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.232871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.233166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.233375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.233438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.233656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.233739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.233988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.234052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.234269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.234332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.234519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.234582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.234805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.234871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.235158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.235221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.235535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.235597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.235846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.235911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.236240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.236303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.236559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.236622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.236903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.236968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.237212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.237276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.237521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.237583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.237831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.237895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.238112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.238174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.238368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.238431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.238670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.238749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.239001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.239064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.239259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.239323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.239557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.239620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.239860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.239924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.240131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.240194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.240401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.240464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.240705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.240787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 
00:28:00.293 [2024-12-06 19:26:45.241046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.241111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.241315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.241378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.241614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.241676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.241949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.293 [2024-12-06 19:26:45.242013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.293 qpair failed and we were unable to recover it. 00:28:00.293 [2024-12-06 19:26:45.242223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.242286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 
00:28:00.294 [2024-12-06 19:26:45.242493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.242555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.242819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.242885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.243127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.243399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.243461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.243702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.243781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 
00:28:00.294 [2024-12-06 19:26:45.243996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.244060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.244259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.244322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.244531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.244595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.244847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.244922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.245099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.245162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 
00:28:00.294 [2024-12-06 19:26:45.245410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.245473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.245680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.245771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.245962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.246025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.246243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.246306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 00:28:00.294 [2024-12-06 19:26:45.246548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.294 [2024-12-06 19:26:45.246612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.294 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.279455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.279518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.279820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.279885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.280184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.280247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.280552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.280616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.280874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.280938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.281194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.281268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.281548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.281612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.281871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.281936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.282174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.282237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.282541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.282604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.282879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.282945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.283263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.283325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.283639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.283702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.283973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.284037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.284333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.284396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.284658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.284759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.285014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.285078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.285317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.285381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.285680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.285763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.286019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.286083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.286356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.286420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.286668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.286750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.286952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.287016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.287280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.287343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.287653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.287717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 
00:28:00.297 [2024-12-06 19:26:45.287945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.288008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.297 [2024-12-06 19:26:45.288318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.297 [2024-12-06 19:26:45.288381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.297 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.288692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.288966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.289029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.289317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.289380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.289637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.289701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.289957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.290020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.290283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.290358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.290560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.290624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.290879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.290944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.291131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.291195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.291426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.291490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.291763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.291828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.292086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.292150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.292481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.292543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.292782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.292847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.293142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.293205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.293470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.293533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.293797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.293861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.294138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.294202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.294483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.294549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.294829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.294894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.295153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.295217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.295449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.295515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.295751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.295817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.296146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.296214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.296512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.296575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.296836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.296900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.297225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.297289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.298 [2024-12-06 19:26:45.297566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.297638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 
00:28:00.298 [2024-12-06 19:26:45.297914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.298 [2024-12-06 19:26:45.297979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.298 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.298292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.298356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.298673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.298758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.299043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.299113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.299416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.299492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 
00:28:00.575 [2024-12-06 19:26:45.299808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.299877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.300158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.300526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.300590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.300866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.300931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.301191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.301261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 
00:28:00.575 [2024-12-06 19:26:45.301571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.301636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.301918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.301984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.302290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.302352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.575 [2024-12-06 19:26:45.302652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.575 [2024-12-06 19:26:45.302738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.575 qpair failed and we were unable to recover it. 00:28:00.576 [2024-12-06 19:26:45.302966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.303031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 
00:28:00.576 [2024-12-06 19:26:45.303300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.303362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 00:28:00.576 [2024-12-06 19:26:45.303691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.303771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 00:28:00.576 [2024-12-06 19:26:45.304077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.304142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 00:28:00.576 [2024-12-06 19:26:45.304468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.304532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 00:28:00.576 [2024-12-06 19:26:45.304808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.576 [2024-12-06 19:26:45.304873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.576 qpair failed and we were unable to recover it. 
00:28:00.576 [2024-12-06 19:26:45.305185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.576 [2024-12-06 19:26:45.305252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:00.576 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" messages for tqpair=0x11dc5d0 (addr=10.0.0.2, port=4420) repeat continuously from 19:26:45.305566 through 19:26:45.339186 ...]
00:28:00.578 [2024-12-06 19:26:45.339424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.339488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.339756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.339821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.340062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.340126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.340337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.340399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.340645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.340712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 
00:28:00.578 [2024-12-06 19:26:45.341016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.341151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.341387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.341457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.578 [2024-12-06 19:26:45.341678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.578 [2024-12-06 19:26:45.341764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.578 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.341973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.342152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.342341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.342553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.342746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.342932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.342968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.343110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.343143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.343286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.343321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.343459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.343531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.343776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.343812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.343954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.343988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.344190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.344256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.344488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.344553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.344790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.344825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.344971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.345005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.345213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.345278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.345502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.345567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.345819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.345855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.345975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.346009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.346240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.346304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.346550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.346615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.346822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.346857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.347045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.347110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.347320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.347385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.347637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.347702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.347903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.347937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.348120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.348185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.348425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.348490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.348746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.348799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.348950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.348984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.349203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.349268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.349484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.349550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 
00:28:00.579 [2024-12-06 19:26:45.349775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.349810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.349957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.349990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.350197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.579 [2024-12-06 19:26:45.350262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.579 qpair failed and we were unable to recover it. 00:28:00.579 [2024-12-06 19:26:45.350504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.350569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.350809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.350843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.350974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.351043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.351354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.351419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.351749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.351814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.351937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.351971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.352253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.352318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.352597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.352661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.352883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.352909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.353068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.353134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.353355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.353418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.353658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.353743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.353908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.353942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.354139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.354203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.354451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.354515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.354807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.354842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.354965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.354999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.355248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.355312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.355637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.355702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.355908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.355942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.356095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.356166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.356468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.356534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.356805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.356841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.356988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.357043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.357300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.357365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.357552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.357617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.357837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.357871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.358055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.358120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.358441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.358506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.358782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.358817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.358939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.358973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 00:28:00.580 [2024-12-06 19:26:45.359166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.359230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it. 
00:28:00.580 [2024-12-06 19:26:45.359529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.580 [2024-12-06 19:26:45.359594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.580 qpair failed and we were unable to recover it.
[the same connect() failed, errno = 111 (ECONNREFUSED) / qpair failed sequence repeats for each subsequent connection retry against 10.0.0.2:4420, tqpair=0x7f5930000b90, from 19:26:45.359 through 19:26:45.398]
00:28:00.583 [2024-12-06 19:26:45.398579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.398645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.398873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.398938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.399223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.399289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.399606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.399671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.399941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.400005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 
00:28:00.583 [2024-12-06 19:26:45.400318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.400383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.400641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.400706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.400950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.401015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.401246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.401311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.401632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.401697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 
00:28:00.583 [2024-12-06 19:26:45.402019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.402084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.402398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.402463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.402783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.402871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.403217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.403282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.403564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.403629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 
00:28:00.583 [2024-12-06 19:26:45.403969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.404053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.404352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.404417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.404715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.583 [2024-12-06 19:26:45.404795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.583 qpair failed and we were unable to recover it. 00:28:00.583 [2024-12-06 19:26:45.405115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.405180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.405448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.405512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.405813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.405879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.406192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.406258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.406530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.406595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.406831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.406897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.407225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.407290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.407622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.407686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.408023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.408088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.408400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.408465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.408713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.408804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.409128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.409193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.409503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.409831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.409899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.410223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.410288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.410624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.410689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.411019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.411084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.411349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.411414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.411750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.411816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.412116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.412181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.412554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.412846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.412913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.413211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.413276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.413552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.413617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.413953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.414020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.414327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.414392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.414703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.414783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.415068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.415133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.415406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.415471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.415740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.415807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.416124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.416190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.416486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.416550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.416817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.416885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.417164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.417228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.417528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.417592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.417899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.417967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.418271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.418336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.418643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.418718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.418918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.418984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.419201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.419266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.419483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.419547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.419787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.419853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 
00:28:00.584 [2024-12-06 19:26:45.420072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.420137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.584 qpair failed and we were unable to recover it. 00:28:00.584 [2024-12-06 19:26:45.420369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.584 [2024-12-06 19:26:45.420434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.420642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.420706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.420970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.421035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.421274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.421339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 
00:28:00.585 [2024-12-06 19:26:45.421570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.421634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.421869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.421935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.422157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.422221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.422468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.422531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.422791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.422858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 
00:28:00.585 [2024-12-06 19:26:45.423109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.423175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.423410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.423474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.423744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.423811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.424037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.424102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.424342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.424406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 
00:28:00.585 [2024-12-06 19:26:45.424662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.424757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.424978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.425044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.425298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.425363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.425629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.425694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 00:28:00.585 [2024-12-06 19:26:45.425952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.585 [2024-12-06 19:26:45.426017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.585 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.449490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.449518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.449651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.449679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.449827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.449857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.449987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.450170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.450331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.450488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.450647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.450822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.450852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.450984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.451134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.451335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.451503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.451651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.451826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.451855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.451986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.452107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.452266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.452446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.452632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.452793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.452949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.452978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.453105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.453261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.453423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.453581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.453751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.453909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.453938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.454140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.454301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.454455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.454606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.454799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.454962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.454991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.455118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.455281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.455437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.455627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.455789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.455950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.455979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 
00:28:00.588 [2024-12-06 19:26:45.456136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.456164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.456297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.588 [2024-12-06 19:26:45.456326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.588 qpair failed and we were unable to recover it. 00:28:00.588 [2024-12-06 19:26:45.456460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.456489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.456644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.456672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.456818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.456848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.456974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.457157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.457296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.457459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.457617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.457753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.457897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.457956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.458134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.458185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.458320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.458349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.458448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.458476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.458606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.458634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.458806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.458835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.458993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.459150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.459326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.459514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.459700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.459873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.459925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.460072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.460248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.460409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.460572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.460775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.460947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.460976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.461105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.461289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.461457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.461613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.461795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.461955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.461984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [2024-12-06 19:26:45.462139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.462168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.462322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.462351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.462484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.462513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.462668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.462697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 00:28:00.589 [2024-12-06 19:26:45.462869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.589 [2024-12-06 19:26:45.462898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.589 qpair failed and we were unable to recover it. 
00:28:00.589 [... identical "connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock" / "qpair failed and we were unable to recover it" messages for tqpair=0x7f5930000b90 (addr=10.0.0.2, port=4420) repeated through 19:26:45.481786 ...]
00:28:00.592 [2024-12-06 19:26:45.481914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.481966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.482143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.482326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.482455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.482619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 
00:28:00.592 [2024-12-06 19:26:45.482776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.482939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.482968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.483104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.483236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.483427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 
00:28:00.592 [2024-12-06 19:26:45.483599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.483785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.483942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.483971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.484101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.484262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 
00:28:00.592 [2024-12-06 19:26:45.484409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.484761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.484916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.484945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.592 [2024-12-06 19:26:45.485038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.485067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 
00:28:00.592 [2024-12-06 19:26:45.485224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.592 [2024-12-06 19:26:45.485253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.592 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.485413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.485441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.485598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.485634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.485740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.485769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.485933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.485983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.486098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.486156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.486315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.486344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.486497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.486526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.486659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.486687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.486811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.486865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.487013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.487240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.487418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.487576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.487742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.487898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.487927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.488087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.488117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.488276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.488305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.488461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.488489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.488650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.488679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.488843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.488894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.489035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.489190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.489393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.489550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.489698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.489879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.489908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.490038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.490201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.490359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.490553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.490679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.490821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.490850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.491004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.491160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.491352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.491508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.491692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.491903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.491953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.492126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.492175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 
00:28:00.593 [2024-12-06 19:26:45.492344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.492392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.492522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.492550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.593 [2024-12-06 19:26:45.492683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.593 [2024-12-06 19:26:45.492716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.593 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.492866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.492895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.493026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 
00:28:00.594 [2024-12-06 19:26:45.493183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.493311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.493499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.493620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.493769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 
00:28:00.594 [2024-12-06 19:26:45.493958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.493987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.494116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.494267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.494434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.494585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 
00:28:00.594 [2024-12-06 19:26:45.494772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.494934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.494963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.495116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.495145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.495270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.495299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 00:28:00.594 [2024-12-06 19:26:45.495423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.594 [2024-12-06 19:26:45.495452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.594 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.515032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.515208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.515392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.515553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.515740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.515883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.516087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.516274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.516438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.516597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.516755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.516918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.516966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.517094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.517254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.517388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.517551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.517740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.517917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.517965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.518103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.518267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.518393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.518551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.518706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.518887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.518942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.519079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.519259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.519421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.519580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.519739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.519900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.519928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.520066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.520196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.520351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.520524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.520648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.520836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.520965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.520994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.521154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.521183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.521310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.521339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.521472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.521501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 
00:28:00.597 [2024-12-06 19:26:45.521631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.521660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.521834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.521883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.597 [2024-12-06 19:26:45.522060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.597 [2024-12-06 19:26:45.522109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.597 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.522281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.522330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.522495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.522523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.522655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.522684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.522808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.522834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.522983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.523156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.523366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.523551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.523715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.523912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.523940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.524352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.524970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.524999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.525160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.525293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.525446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.525599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.525783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.525934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.525963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.526087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.526274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.526436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.526601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.526786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.526937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.526966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.527122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.527155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.527249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.527278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 00:28:00.598 [2024-12-06 19:26:45.527408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.527437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
00:28:00.598 [2024-12-06 19:26:45.527574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.598 [2024-12-06 19:26:45.527603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.598 qpair failed and we were unable to recover it. 
[previous error triplet repeated continuously from 19:26:45.527707 through 19:26:45.547448, all with addr=10.0.0.2, port=4420; most repeats report tqpair=0x7f5930000b90, with a brief run of tqpair=0x7f5938000b90 between 19:26:45.545504 and 19:26:45.546426 before reverting to 0x7f5930000b90]
00:28:00.601 [2024-12-06 19:26:45.547545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.547574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.547715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.547751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.547922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.547951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.548151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.548194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.548387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.548428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 
00:28:00.601 [2024-12-06 19:26:45.548616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.548658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.548845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.548874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.549028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.549066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.549270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.549311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.549438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.549479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 
00:28:00.601 [2024-12-06 19:26:45.549687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.549740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.549886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.549917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.550066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.550229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.550436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 
00:28:00.601 [2024-12-06 19:26:45.550616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.550762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.550930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.550960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.551121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.551148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 00:28:00.601 [2024-12-06 19:26:45.551311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.601 [2024-12-06 19:26:45.551341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.601 qpair failed and we were unable to recover it. 
00:28:00.601 [2024-12-06 19:26:45.551512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.551541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.551697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.551734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.551870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.551900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.552349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.552854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.552984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.553148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.553333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.553492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.553623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.553806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.553836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.553996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.554143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.554263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.554446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.554629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.554825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.554875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.555000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.555225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.555438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.555571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.555698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.555909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.555958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.556132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.556183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.556311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.556341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.556474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.556502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.556663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.556692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.556878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.556930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.557074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.557264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.557419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.557551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.557730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.557886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.557915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.558018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.558174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.602 [2024-12-06 19:26:45.558357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.558519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.558709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.558914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.558944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 00:28:00.602 [2024-12-06 19:26:45.559103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.602 [2024-12-06 19:26:45.559131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.602 qpair failed and we were unable to recover it. 
00:28:00.603 [2024-12-06 19:26:45.559230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.559260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.559418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.559447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.559543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.559572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.559700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.559739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.559885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.559941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 
00:28:00.603 [2024-12-06 19:26:45.560069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.560254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.560430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.560591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.560780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 
00:28:00.603 [2024-12-06 19:26:45.560945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.560974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.561114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.561142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.561309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.561338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.561459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.561487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 00:28:00.603 [2024-12-06 19:26:45.561586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.603 [2024-12-06 19:26:45.561614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.603 qpair failed and we were unable to recover it. 
00:28:00.605 [2024-12-06 19:26:45.581109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.581137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.581271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.581300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.581456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.581485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.581642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.581671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.581845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.581874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 
00:28:00.605 [2024-12-06 19:26:45.582009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.582038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.582180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.582228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.605 [2024-12-06 19:26:45.582362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.605 [2024-12-06 19:26:45.582391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.605 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.582549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.582577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.582756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.582786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.582927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.582956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.583081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.583243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.583434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.583591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.583748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.583937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.583966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.584124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.584153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.584307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.584336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.584496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.584526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.584630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.584659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.584793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.584843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.585009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.585062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.585228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.585275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.585404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.585432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.585565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.585595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.585772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.585827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.586011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.586200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.586412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.586585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.586729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.586939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.586988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.587135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.587182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.587345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.587374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.587514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.587543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.587646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.587675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.587929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.587960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.588128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.588177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.588339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.588391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.588557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.588585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.588728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.588758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.588881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.588909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.589349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.589945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.589974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 
00:28:00.606 [2024-12-06 19:26:45.590071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.590100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.590256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.590285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.590444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.590473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.606 [2024-12-06 19:26:45.590645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.606 [2024-12-06 19:26:45.590674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.606 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.590826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.590856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 
00:28:00.607 [2024-12-06 19:26:45.590992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.591176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.591363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.591552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.591710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 
00:28:00.607 [2024-12-06 19:26:45.591904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.591933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.592067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.592250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.592379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.592569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 
00:28:00.607 [2024-12-06 19:26:45.592733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.592932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.592961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.593121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.593276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.593427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 
00:28:00.607 [2024-12-06 19:26:45.593589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.593745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.593899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.593929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.594037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.594065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 00:28:00.607 [2024-12-06 19:26:45.594222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.594251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it. 
00:28:00.607 [2024-12-06 19:26:45.594410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.607 [2024-12-06 19:26:45.594439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.607 qpair failed and we were unable to recover it.
[previous connect()/qpair-failure pair repeated for tqpair=0x7f5930000b90 with timestamps 19:26:45.594572 through 19:26:45.606230; six occurrences for tqpair=0x11dc5d0 between 19:26:45.606541 and 19:26:45.607448; then for tqpair=0x7f5930000b90 again through 19:26:45.614631 (build timestamps 00:28:00.607-00:28:00.892)]
00:28:00.892 [2024-12-06 19:26:45.614790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.892 [2024-12-06 19:26:45.614848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.892 qpair failed and we were unable to recover it. 00:28:00.892 [2024-12-06 19:26:45.614994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.892 [2024-12-06 19:26:45.615057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.892 qpair failed and we were unable to recover it. 00:28:00.892 [2024-12-06 19:26:45.615245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.892 [2024-12-06 19:26:45.615297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.892 qpair failed and we were unable to recover it. 00:28:00.892 [2024-12-06 19:26:45.615411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.615440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.615564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.615592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.615715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.615753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.615895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.615924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.616078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.616265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.616430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.616590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.616750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.616917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.616945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.617103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.617132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.617288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.617317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.617427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.617456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.617611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.617640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.617780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.617833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.617995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.618044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.618217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.618265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.618440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.618469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.618592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.618621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.618791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.618844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.619018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.619219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.619396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.619529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.619690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.619890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.619919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.620082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.620111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.620271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.620300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.620507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.620537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.620662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.620690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.620834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.620864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.621031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.621160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.621281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.621452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.621582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.621771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 
00:28:00.893 [2024-12-06 19:26:45.621898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.893 [2024-12-06 19:26:45.621927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.893 qpair failed and we were unable to recover it. 00:28:00.893 [2024-12-06 19:26:45.622059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.622227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.622413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.622560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.622753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.622885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.622914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.623072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.623100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.623261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.623290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.623448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.623476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.623626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.623655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.623799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.623850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.623987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.624161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.624342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.624530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.624663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.624876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.624905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.625035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.625196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.625381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.625568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.625732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.625867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.625897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.626025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.626216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.626371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.626529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.626716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.626944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.626992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.627122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.627151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.627310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.627339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.627498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.627527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.627632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.627662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.627776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 
00:28:00.894 [2024-12-06 19:26:45.627979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.628032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.628168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.628223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.628325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.628354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.628455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.894 [2024-12-06 19:26:45.628484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.894 qpair failed and we were unable to recover it. 00:28:00.894 [2024-12-06 19:26:45.628576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.895 [2024-12-06 19:26:45.628605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.895 qpair failed and we were unable to recover it. 
00:28:00.895 [... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for every retry from 19:26:45.628759 through 19:26:45.648320 ...]
00:28:00.898 [2024-12-06 19:26:45.648449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.898 [2024-12-06 19:26:45.648478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:00.898 qpair failed and we were unable to recover it.
00:28:00.898 [2024-12-06 19:26:45.648635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.648665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.648879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.648930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.649087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.649137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.649287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.649337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.649466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.649495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.649651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.649681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.649843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.649878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.650057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.650107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.650248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.650296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.650453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.650483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.650688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.650717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.650887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.650917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.651061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.651090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.651229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.651281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.651405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.651435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.651561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.651590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.651795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.651825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.651980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.652108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.652293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.652462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.652649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.652814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.652844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.652995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.653191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.653354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.653540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.653729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.653855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.653884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.654099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.654152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.654335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.654382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.654510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.654540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.654637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.654666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.654901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.654954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.655140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.655189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 
00:28:00.898 [2024-12-06 19:26:45.655336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.655386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.898 qpair failed and we were unable to recover it. 00:28:00.898 [2024-12-06 19:26:45.655517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.898 [2024-12-06 19:26:45.655547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.655702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.655743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.655906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.655959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.656088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.656211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.656395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.656553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.656741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.656895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.656953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.657096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.657155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.657289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.657323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.657482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.657512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.657635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.657665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.657813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.657867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.658003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.658203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.658364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.658517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.658701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.658882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.658911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.659069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.659098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.659201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.659230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.659433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.659463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.659596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.659625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.659751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.659782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.659967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.660202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.660393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.660547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.660703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.660864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.660894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.661098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.661128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.661313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.661363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.661521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.661550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 
00:28:00.899 [2024-12-06 19:26:45.661676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.661706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.661873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.661903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.662060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.662089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.662225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.899 [2024-12-06 19:26:45.662255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.899 qpair failed and we were unable to recover it. 00:28:00.899 [2024-12-06 19:26:45.662350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.900 [2024-12-06 19:26:45.662380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.900 qpair failed and we were unable to recover it. 
00:28:00.900 [2024-12-06 19:26:45.662510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.900 [2024-12-06 19:26:45.662540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.900 qpair failed and we were unable to recover it. 
[identical posix_sock_create / nvme_tcp_qpair_connect_sock error triplets (errno = 111, tqpair=0x7f5930000b90, addr=10.0.0.2, port=4420) repeated through 19:26:45.682446; repeats omitted]
00:28:00.903 [2024-12-06 19:26:45.682542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.682571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.682700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.682739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.682878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.682908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.683073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.683202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.683332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.683490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.683675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.683847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.683876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.684029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.684183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.684341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.684499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.684625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.684813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.684843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.684998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.685150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.685292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.685448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.685634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.685804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.685921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.685950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.686105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.686134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.686294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.686323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.686480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.686508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.686648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.686676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.686794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.686824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.686984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.687037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.687253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.687305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.687466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.687496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 
00:28:00.903 [2024-12-06 19:26:45.687658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.687688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.687835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.903 [2024-12-06 19:26:45.687864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.903 qpair failed and we were unable to recover it. 00:28:00.903 [2024-12-06 19:26:45.687975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.904 [2024-12-06 19:26:45.688027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.904 qpair failed and we were unable to recover it. 00:28:00.904 [2024-12-06 19:26:45.688199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.904 [2024-12-06 19:26:45.688249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.904 qpair failed and we were unable to recover it. 00:28:00.904 [2024-12-06 19:26:45.688411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.904 [2024-12-06 19:26:45.688440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.904 qpair failed and we were unable to recover it. 
00:28:00.904 [2024-12-06 19:26:45.688633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.904 [2024-12-06 19:26:45.688674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.904 qpair failed and we were unable to recover it.
00:28:00.905 [2024-12-06 19:26:45.697435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.697464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.697617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.697646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.697819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.697864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.698077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.698231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 
00:28:00.905 [2024-12-06 19:26:45.698415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.698558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.698691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.698904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.698935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.699054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.699084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 
00:28:00.905 [2024-12-06 19:26:45.699265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.699295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.699501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.699662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.699691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.699848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.699877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.905 qpair failed and we were unable to recover it. 00:28:00.905 [2024-12-06 19:26:45.700060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.905 [2024-12-06 19:26:45.700117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.700311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.700365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.700500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.700540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.700700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.700741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.700878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.700907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.701025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.701055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.701216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.701251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.701444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.701494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.701657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.701686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.701865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.701909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.702115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.702166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.702305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.702359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.702546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.702595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.702728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.702758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.702867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.702897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.703019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.703049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.703249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.703318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.703478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.703507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.703623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.703652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.703785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.703837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.704005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.704185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.704367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.704544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.704685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.704968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.704999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.705126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.705155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.705371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.705433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.705647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.705699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.705853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.705883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.706031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.706225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.706475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.706680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.706827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.706959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.706987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 
00:28:00.906 [2024-12-06 19:26:45.707099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.707160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.707300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.707352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.906 [2024-12-06 19:26:45.707461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.906 [2024-12-06 19:26:45.707503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.906 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.707684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.707712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.707839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.707884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.708005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.708155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.708307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.708548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.708750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.708905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.708935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.709052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.709205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.709351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.709525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.709779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.709931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.709960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.710087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.710266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.710482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.710654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.710783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.710906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.710935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.711032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.711233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.711436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.711627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.711780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.711933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.711963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 00:28:00.907 [2024-12-06 19:26:45.712112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.907 [2024-12-06 19:26:45.712165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.907 qpair failed and we were unable to recover it. 
00:28:00.907 [2024-12-06 19:26:45.712266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.907 [2024-12-06 19:26:45.712310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.907 qpair failed and we were unable to recover it.
00:28:00.908 [2024-12-06 19:26:45.718882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.908 [2024-12-06 19:26:45.718929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:00.908 qpair failed and we were unable to recover it.
[... identical connect() failure records (errno = 111) for tqpair=0x7f592c000b90 and tqpair=0x7f5930000b90 repeat through 2024-12-06 19:26:45.732662; duplicate log entries omitted ...]
00:28:00.910 [2024-12-06 19:26:45.732771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.732802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 00:28:00.910 [2024-12-06 19:26:45.732929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.732958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 00:28:00.910 [2024-12-06 19:26:45.733082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.733111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 00:28:00.910 [2024-12-06 19:26:45.733235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.733279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 00:28:00.910 [2024-12-06 19:26:45.733444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.733472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 
00:28:00.910 [2024-12-06 19:26:45.733593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.910 [2024-12-06 19:26:45.733622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.910 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.733751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.733780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.733905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.733934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.734056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.734197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.734342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.734523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.734767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.734917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.734946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.735077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.735271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.735425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.735577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.735735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.735887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.735916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.736023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.736251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.736407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.736560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.736685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.736850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.736879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.737006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.737190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.737426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.737587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.737808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.737956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.737984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.738123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.738152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.738294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.738323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.738505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.738532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.738730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.738758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.738854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.738883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.739014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.739157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.739351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 
00:28:00.911 [2024-12-06 19:26:45.739507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.739733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.739889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.739946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.740091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.911 [2024-12-06 19:26:45.740142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.911 qpair failed and we were unable to recover it. 00:28:00.911 [2024-12-06 19:26:45.740246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.740274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.740405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.740433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.740564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.740592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.740768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.740896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.740925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.741049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.741077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.741214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.741243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.741428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.741468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.741571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.741599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.741848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.741887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.742044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.742073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.742276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.742304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.742447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.742475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.742655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.742683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.742829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.742881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.743047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.743214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.743373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.743520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.743675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.743828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.743856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.743988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.744144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.744295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.744420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.744591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.744752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.744959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.744988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.745162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.745190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.745366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.745394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 00:28:00.912 [2024-12-06 19:26:45.745594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.912 [2024-12-06 19:26:45.745623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.912 qpair failed and we were unable to recover it. 
00:28:00.912 [2024-12-06 19:26:45.745798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.912 [2024-12-06 19:26:45.745861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.912 qpair failed and we were unable to recover it.
00:28:00.915 [2024-12-06 19:26:45.765868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.915 [2024-12-06 19:26:45.765897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.915 qpair failed and we were unable to recover it. 00:28:00.915 [2024-12-06 19:26:45.766062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.915 [2024-12-06 19:26:45.766114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.915 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.766213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.766242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.766400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.766428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.766600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.766634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.766786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.766866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.767055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.767110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.767250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.767300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.767445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.767477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.767619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.767789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.767859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.767986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.768165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.768354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.768533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.768711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.768851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.768879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.769056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.769084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.769223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.769273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.769381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.769409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.769590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.769739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.769767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.769958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.770141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.770329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.770555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.770704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.770902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.770953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.771082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.771139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.771287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.771339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.771483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.771512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.771639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.771667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.771850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.771879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.916 [2024-12-06 19:26:45.772333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 00:28:00.916 [2024-12-06 19:26:45.772909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.916 [2024-12-06 19:26:45.772938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.916 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.773085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.773214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.773375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.773615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.773745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.773932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.773960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.774051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.774229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.774379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.774593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.774733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.774907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.774959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.775081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.775280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.775490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.775616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.775800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.775957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.775986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.776180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.776209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.776307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.776335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.776511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.776540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.776700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.776742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.776866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.776940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.777114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.777326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.777466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.777584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.777771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.777940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.777993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.778136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.778165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.778292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.778321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.778444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.778473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.778571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.778599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.778741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.778771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 00:28:00.917 [2024-12-06 19:26:45.778952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.917 [2024-12-06 19:26:45.779005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.917 qpair failed and we were unable to recover it. 
00:28:00.917 [2024-12-06 19:26:45.779136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.917 [2024-12-06 19:26:45.779165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.917 qpair failed and we were unable to recover it.
00:28:00.917 [2024-12-06 19:26:45.779282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.917 [2024-12-06 19:26:45.779310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.917 qpair failed and we were unable to recover it.
00:28:00.917 [2024-12-06 19:26:45.779425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.917 [2024-12-06 19:26:45.779454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.917 qpair failed and we were unable to recover it.
00:28:00.917 [2024-12-06 19:26:45.779652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.779680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.779802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.779831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.780042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.780070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.780268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.780317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.780443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.780471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.780621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.780649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.780780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.780843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.781938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.781990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.782924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.782975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.783897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.783926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.784835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.784864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.785031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.785060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.785150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.785179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.785323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.918 [2024-12-06 19:26:45.785351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.918 qpair failed and we were unable to recover it.
00:28:00.918 [2024-12-06 19:26:45.785530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.785559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.785686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.785714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.785826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.785866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.785998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.786212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.786414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.786591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.786715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.786898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.786951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.787886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.787914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.788926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.788954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.789143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.789298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.789442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.789596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.789752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.789964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.790165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.790287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.790436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.790663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.790928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.790967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.791137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.791188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.791317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.791375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.791596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.791624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.791788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.791858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.791979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.792040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.792178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.919 [2024-12-06 19:26:45.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.919 qpair failed and we were unable to recover it.
00:28:00.919 [2024-12-06 19:26:45.792364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.792392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.792563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.792590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.792789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.792818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.792960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.792988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.793883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.793911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.794962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.794990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.795876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.795904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.796088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.796124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.796297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.796326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.796607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.796636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.796754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.796784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.796979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.797037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.797216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.797278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.797436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.797465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.797599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.797627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.797827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.797878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.798110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.798289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.798434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.798705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.798840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.798977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.920 [2024-12-06 19:26:45.799004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.920 qpair failed and we were unable to recover it.
00:28:00.920 [2024-12-06 19:26:45.799150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.921 [2024-12-06 19:26:45.799177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.921 qpair failed and we were unable to recover it.
00:28:00.921 [2024-12-06 19:26:45.799298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.921 [2024-12-06 19:26:45.799325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.921 qpair failed and we were unable to recover it.
00:28:00.921 [2024-12-06 19:26:45.799488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.799516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.799734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.799772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.799951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.799978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.800139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.800213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.800394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.800422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.800518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.800549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.800706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.800757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.800878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.800935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.801157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.801209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.801362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.801415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.801534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.801562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.801668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.801697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.801896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.801925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.802116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.802143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.802258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.802311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.802425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.802454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.802684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.802807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.802870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.803088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.803285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.803452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.803602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.803719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.803903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.803932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.804034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.804213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.804368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.804521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.804640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.804788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.804935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.804963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.805089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.805258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.805387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.805545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 
00:28:00.921 [2024-12-06 19:26:45.805672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.921 qpair failed and we were unable to recover it. 00:28:00.921 [2024-12-06 19:26:45.805827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.921 [2024-12-06 19:26:45.805856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.805999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.806168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.806310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.806439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.806646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.806783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.806944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.806999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.807169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.807224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.807356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.807415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.807537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.807565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.807673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.807703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.807855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.807906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.808061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.808089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.808242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.808271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.808473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.808501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.808662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.808690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.808843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.808896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.809019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.809048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.809256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.809285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.809407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.809435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.809639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.809668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.809860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.809923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.810075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.810129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.810260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.810287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.810465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.810492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.810612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.810641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.810844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.810898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.811033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.811090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.811240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.811292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.811441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.811468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.811614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.811642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.811789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.811844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.811991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.812063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 
00:28:00.922 [2024-12-06 19:26:45.812222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.812250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.812371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.812399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.812526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.812554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.922 [2024-12-06 19:26:45.812675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.922 [2024-12-06 19:26:45.812703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.922 qpair failed and we were unable to recover it. 00:28:00.923 [2024-12-06 19:26:45.812832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.923 [2024-12-06 19:26:45.812860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:00.923 qpair failed and we were unable to recover it. 
00:28:00.923 [2024-12-06 19:26:45.812997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.923 [2024-12-06 19:26:45.813025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:00.923 qpair failed and we were unable to recover it.
00:28:00.923 [2024-12-06 19:26:45.815934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.923 [2024-12-06 19:26:45.815977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:00.923 qpair failed and we were unable to recover it.
00:28:00.926 [2024-12-06 19:26:45.842249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.842309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.842687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.842774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.843024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.843086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.843272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.843335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.843551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.843615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.843921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.843998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.844344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.844408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.844707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.844803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.844999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.845064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.845275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.845352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.845644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.845710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.845942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.846005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.846376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.846443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.846628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.846702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.846933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.847002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.847287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.847351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.847597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.847661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.847993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.848078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.848381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.848446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.848676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.848758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.848993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.849057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.849299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.849373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.849567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.849636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.849876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.849941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.850225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.850289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.850606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.850681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.851039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.851104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.851346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.851410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.851635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.851700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.851973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.852037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 
00:28:00.926 [2024-12-06 19:26:45.852297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.852362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.852577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.852648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.926 [2024-12-06 19:26:45.852934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.926 [2024-12-06 19:26:45.852999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.926 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.853253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.853596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.853662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.853942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.854007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.854254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.854319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.854547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.854619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.854867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.854940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.855212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.855278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.855601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.855672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.855907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.855972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.856183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.856255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.856616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.856682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.856949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.857024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.857295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.857360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.857579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.857644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.857923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.857994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.858269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.858334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.858609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.858673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.858973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.859044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.859304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.859369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.859659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.859752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.860008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.860073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.860391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.860461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.860678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.860760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.860975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.861040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.861224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.861288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.861574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.861649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.861965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.862031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.862347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.862411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.862753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.862822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.863086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.863157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.863468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.863544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.927 [2024-12-06 19:26:45.863860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.863926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 
00:28:00.927 [2024-12-06 19:26:45.864138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.927 [2024-12-06 19:26:45.864202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.927 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.864450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.864520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.864752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.864818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.865111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.865174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.865492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.865556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 
00:28:00.928 [2024-12-06 19:26:45.865872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.865955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.866252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.866322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.866563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.866627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.866854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.866924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.867148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.867212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 
00:28:00.928 [2024-12-06 19:26:45.867438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.867503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.867711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.867803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.868075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.868138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.868371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.868435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 00:28:00.928 [2024-12-06 19:26:45.868669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.928 [2024-12-06 19:26:45.868775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.928 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.904611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.904676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.904923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.904990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.905195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.905263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.905532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.905596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.905807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.905874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.906094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.906158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.906345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.906416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.906755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.906822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.907088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.907152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.907389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.907455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.907717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.907799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.908059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.908124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.908345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.908409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.908670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.908755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.908973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.909038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.909264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.909328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.909617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.909691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.910027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.910092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.910404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.910475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.910757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.910823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.911049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.911113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.911437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.911504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.911714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.911795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.912011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.912307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.912375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.931 [2024-12-06 19:26:45.912623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.912688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.912924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.912989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.913207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.913271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.913471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.913539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 00:28:00.931 [2024-12-06 19:26:45.913820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.931 [2024-12-06 19:26:45.913886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.931 qpair failed and we were unable to recover it. 
00:28:00.932 [2024-12-06 19:26:45.914090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.914155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.914423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.914489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.914955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.915030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.915318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.915384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.915579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.915642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 
00:28:00.932 [2024-12-06 19:26:45.915976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.916042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.916272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.916336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:00.932 [2024-12-06 19:26:45.916548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.932 [2024-12-06 19:26:45.916612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:00.932 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.916838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.916906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.917097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.917163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.917405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.917472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.917717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.917796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.918083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.918147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.918431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.918494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.918829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.918906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.919260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.919326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.919644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.919707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.920023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.920095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.920416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.920494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.920818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.920884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.921147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.921210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.921519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.921584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.921838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.921905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.922137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.922202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.922424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.922487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.922713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.922813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.923082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.923148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.923394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.923458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.923695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.923777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.924057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.924122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.924474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.924855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.924921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.925103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.925166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.925397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.925460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.925683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.925762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.926006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.926071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.926387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.926462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.926768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.926834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.927025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.927105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.927365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.927432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.927778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.927852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.928091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.928164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.928349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.928414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.928671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.928760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-06 19:26:45.929067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.214 [2024-12-06 19:26:45.929130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-06 19:26:45.929415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.215 [2024-12-06 19:26:45.929479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.215 qpair failed and we were unable to recover it. 
[log condensed: the identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x11dc5d0 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 19:26:45.929 through 19:26:45.966; repeated entries omitted] 
00:28:01.217 [2024-12-06 19:26:45.967200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.217 [2024-12-06 19:26:45.967267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.967567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.967632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.967899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.967965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.968289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.968369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.968639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.968705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.969043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.969120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.969318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.969388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.969600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.969676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.969917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.969981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.970206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.970271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.970462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.970530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.970750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.970816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.971082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.971147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.971428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.971492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.971766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.971832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.972176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.972252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.972587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.972651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.972892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.972957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.973210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.973273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.973447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.973513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.973717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.973801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.974011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.974076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.974307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.974371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.974653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.974717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.974954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.975024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.975347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.975418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.975662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.975759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.976057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.976122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.976357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.976422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.976640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.976711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 
00:28:01.218 [2024-12-06 19:26:45.976931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.976996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.977237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.977300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.218 [2024-12-06 19:26:45.977526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.218 [2024-12-06 19:26:45.977596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.218 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.977910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.977981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.978313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.978381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.978639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.978708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.978970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.979041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.979404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.979473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.979812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.979878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.980108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.980173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.980382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.980445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.980665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.980742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.980967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.981038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.981301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.981369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.981619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.981684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.982031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.982102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.982313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.982384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.982650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.982715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.982972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.983036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.983288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.983351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.983562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.983626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.983993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.984059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.984378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.984447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.984741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.984808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.985000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.985065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.985274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.985337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.985537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.985601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.985824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.985897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.986114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.986178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.986374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.986438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.986642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.986707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.986931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.986996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.987203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.987266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.987444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.987517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.987756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.987828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.988028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.988094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.988290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.988359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.988595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.988666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.988879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.988944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.989266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.989330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 
00:28:01.219 [2024-12-06 19:26:45.989576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.989648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.219 qpair failed and we were unable to recover it. 00:28:01.219 [2024-12-06 19:26:45.989879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.219 [2024-12-06 19:26:45.989955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.990182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.990245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.990435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.990498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.990712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.990803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 
00:28:01.220 [2024-12-06 19:26:45.991009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.991088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.991408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.991471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.991698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.991794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.991995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.992058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 00:28:01.220 [2024-12-06 19:26:45.992234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.220 [2024-12-06 19:26:45.992299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.220 qpair failed and we were unable to recover it. 
00:28:01.220-00:28:01.223 [2024-12-06 19:26:45.992601 .. 19:26:46.021212] (same sequence repeats: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:28:01.223 [2024-12-06 19:26:46.021357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.021426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.021637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.021701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.021887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.021923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.022062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.022104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.022240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.022275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.022426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.022462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.022619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.022683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.022887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.022923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.023158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.023193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.023332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.023367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.023584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.023636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.023779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.023815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.023939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.023975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.024086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.024131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.024303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.024338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.024487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.024524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.024714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.024773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.024895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.024930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.025139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.025187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.025337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.025372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.025588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.025623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.025735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.025786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.025901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.025935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.026104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.026141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.026299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.026334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.026554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.026589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.026731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.026791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.026934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.026968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.027159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.027215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.027368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.027411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 
00:28:01.223 [2024-12-06 19:26:46.027572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.223 [2024-12-06 19:26:46.027613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.223 qpair failed and we were unable to recover it. 00:28:01.223 [2024-12-06 19:26:46.027775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.027810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.027944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.027978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.028197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.028250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.028448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.028487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.028661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.028697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.028836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.028870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.028980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.029140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.029329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.029507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.029707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.029878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.029912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.030041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.030199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.030394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.030571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.030795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.030946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.030980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.031142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.031175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.031309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.031343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.031480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.031514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.031650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.031684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.031811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.031846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.031985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.032174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.032394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.032577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.032747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.032887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.032921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.033127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.033161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.033294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.033328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.033482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.033516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.033680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.033713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 00:28:01.224 [2024-12-06 19:26:46.033856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.224 [2024-12-06 19:26:46.033890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.224 qpair failed and we were unable to recover it. 
00:28:01.224 [2024-12-06 19:26:46.034034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.224 [2024-12-06 19:26:46.034068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.224 qpair failed and we were unable to recover it.
00:28:01.224 (the sequence repeats at 19:26:46.034219 and 19:26:46.034391)
00:28:01.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 330035 Killed "${NVMF_APP[@]}" "$@"
00:28:01.224 [2024-12-06 19:26:46.034642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.224 [2024-12-06 19:26:46.034677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.224 qpair failed and we were unable to recover it.
00:28:01.225 (the sequence repeats at 19:26:46.034802)
00:28:01.225 [2024-12-06 19:26:46.034947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.034981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 (the sequence repeats with advancing timestamps, 19:26:46.035097 through 19:26:46.036400)
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.225 [2024-12-06 19:26:46.036582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.225 [2024-12-06 19:26:46.036616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.225 qpair failed and we were unable to recover it. 00:28:01.225 [2024-12-06 19:26:46.036760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.225 [2024-12-06 19:26:46.036795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.225 qpair failed and we were unable to recover it. 00:28:01.225 [2024-12-06 19:26:46.036900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.225 [2024-12-06 19:26:46.036934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.225 qpair failed and we were unable to recover it. 00:28:01.225 [2024-12-06 19:26:46.037067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.225 [2024-12-06 19:26:46.037100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.225 qpair failed and we were unable to recover it. 00:28:01.225 [2024-12-06 19:26:46.037210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.225 [2024-12-06 19:26:46.037244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.225 qpair failed and we were unable to recover it. 
00:28:01.225 [2024-12-06 19:26:46.037385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.037418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.037530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.037564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.037680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.037714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.037840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.037873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.037980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.038149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.038388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.038579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.038755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.038926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.038960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.039122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.039156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.039348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.039382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.039518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.039552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.039691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.039733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 [2024-12-06 19:26:46.039878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.039912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=330509
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:01.225 [2024-12-06 19:26:46.040071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.040105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 330509
[2024-12-06 19:26:46.040265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.040299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 330509 ']'
[2024-12-06 19:26:46.040456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.225 [2024-12-06 19:26:46.040513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.225 qpair failed and we were unable to recover it.
00:28:01.225 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-12-06 19:26:46.040668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-12-06 19:26:46.040738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:28:01.226 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:01.226 [2024-12-06 19:26:46.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.040924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:01.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:01.226 [2024-12-06 19:26:46.041123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.041157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:01.226 [2024-12-06 19:26:46.041303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.041353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.041492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.041529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.041688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.041730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.041873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.041912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.042860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.042977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.043179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.043335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.043532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.043704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.043886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.043921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.044915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.044949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.045163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.045203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.045348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.045385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.045584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.045621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.045761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.045795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.045916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.045951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.046093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.046140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.046300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.046335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.046445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.226 [2024-12-06 19:26:46.046479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.226 qpair failed and we were unable to recover it.
00:28:01.226 [2024-12-06 19:26:46.046617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.046666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.046793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.046830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.046940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.046974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.047122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.047155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.047290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.047325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.047465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.047499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.047622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.047659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.047830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.047865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.048895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.048929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.049063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.049097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.049212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.049247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.050223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.050267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.050446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.050484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.050638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.050676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.050836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.050871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.050978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.051133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.051302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.051474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.051647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.051842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.051877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.052951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.052985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.053153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.053188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.227 qpair failed and we were unable to recover it.
00:28:01.227 [2024-12-06 19:26:46.053295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.227 [2024-12-06 19:26:46.053330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.053466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.053500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.053604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.053638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.053776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.053813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.053923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.053958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.054077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.054112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.054272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.228 [2024-12-06 19:26:46.054307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.228 qpair failed and we were unable to recover it.
00:28:01.228 [2024-12-06 19:26:46.054412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.054446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.054608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.054664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.054789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.054823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.054931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.054965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.055169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.055203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.055369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.055406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.055541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.055575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.055687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.055730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.055844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.055880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.055996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.056162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.056327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.056493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.056703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.056857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.056891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.056998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.057172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.057321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.057495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.057673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.057838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.057874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.058651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.058865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.058977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.059011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.059112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.059145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.228 [2024-12-06 19:26:46.059309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.059343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 
00:28:01.228 [2024-12-06 19:26:46.059452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.228 [2024-12-06 19:26:46.059487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.228 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.059597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.059631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.059744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.059778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.059895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.059931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.060076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.060110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.060231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.060265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.061104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.061325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.061475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.061626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.061779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.061930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.061957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.062538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.062965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.063056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.063201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.063342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.063515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.063655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.063785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.063905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.063931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.064634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.064664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.064784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.064812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.065455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.065485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.065612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.065638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.065754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.065782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.065878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.065905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.065991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.066018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.066133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.066158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.229 [2024-12-06 19:26:46.066305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.066331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 
00:28:01.229 [2024-12-06 19:26:46.066463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.229 [2024-12-06 19:26:46.066488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.229 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.066648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.066674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.066783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.066810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.066924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.066952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 
00:28:01.230 [2024-12-06 19:26:46.067198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 
00:28:01.230 [2024-12-06 19:26:46.067837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.067954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.067980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.068105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.068280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.068395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 
00:28:01.230 [2024-12-06 19:26:46.068537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.068699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.068855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.068882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.069006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.069032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 00:28:01.230 [2024-12-06 19:26:46.069160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.230 [2024-12-06 19:26:46.069187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.230 qpair failed and we were unable to recover it. 
00:28:01.230 [2024-12-06 19:26:46.069327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.230 [2024-12-06 19:26:46.069353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.230 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt from [2024-12-06 19:26:46.069479] through [2024-12-06 19:26:46.087206] ...]
00:28:01.233 [2024-12-06 19:26:46.087290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.087315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 00:28:01.233 [2024-12-06 19:26:46.087467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.087492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 00:28:01.233 [2024-12-06 19:26:46.087601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.087627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 00:28:01.233 [2024-12-06 19:26:46.087821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.087848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 00:28:01.233 [2024-12-06 19:26:46.087932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.087959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-12-06 19:26:46.088048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.233 [2024-12-06 19:26:46.088089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.233 qpair failed and we were unable to recover it. 00:28:01.233 [2024-12-06 19:26:46.088216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.088359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.088507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.088615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-12-06 19:26:46.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.088876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.088902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.089021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.089047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.089169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.089195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.089763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.089794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-12-06 19:26:46.089932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.089959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.090047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.090073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.090223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.090250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.090389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.090429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.090621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.090661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-12-06 19:26:46.090802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.090829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.090979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.091005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.091186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.091212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.091354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.091379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 00:28:01.234 [2024-12-06 19:26:46.091515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.234 [2024-12-06 19:26:46.091540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-12-06 19:26:46.091705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.234 [2024-12-06 19:26:46.091738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.234 qpair failed and we were unable to recover it.
00:28:01.234 [2024-12-06 19:26:46.091793] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization...
00:28:01.234 [2024-12-06 19:26:46.091835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.234 [2024-12-06 19:26:46.091863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.234 [2024-12-06 19:26:46.091867] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:01.234 qpair failed and we were unable to recover it.
00:28:01.234 [2024-12-06 19:26:46.092016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.234 [2024-12-06 19:26:46.092045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.234 qpair failed and we were unable to recover it.
00:28:01.234 [2024-12-06 19:26:46.092169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.234 [2024-12-06 19:26:46.092194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.234 qpair failed and we were unable to recover it.
00:28:01.234 [2024-12-06 19:26:46.092296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.234 [2024-12-06 19:26:46.092336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.234 qpair failed and we were unable to recover it.
00:28:01.236 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x11dc5d0 / qpair failed sequence repeats for every retry from 19:26:46.092513 through 19:26:46.103761 ...]
00:28:01.236 [2024-12-06 19:26:46.103853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.103879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 
00:28:01.236 [2024-12-06 19:26:46.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.104885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.104911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.105037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.105201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 
00:28:01.236 [2024-12-06 19:26:46.105365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.105573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.105757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.105875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.105902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.106020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 
00:28:01.236 [2024-12-06 19:26:46.106194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.106366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.106485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.106670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.106836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.106862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 
00:28:01.236 [2024-12-06 19:26:46.106984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.107116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.107235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.107434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.107576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 
00:28:01.236 [2024-12-06 19:26:46.107757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.236 [2024-12-06 19:26:46.107784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.236 qpair failed and we were unable to recover it. 00:28:01.236 [2024-12-06 19:26:46.107902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.107928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.108081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.108378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.108636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.108763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.108887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.108913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.109065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.109235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.109445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.109607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.109799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.109939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.109965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.110672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.110699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.110851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.110879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.111578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.111604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.111736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.111765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.111864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.111890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.111996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.112160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.112277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.112477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.112644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.112792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.112942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.112969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.113106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.113323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.113498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.113673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 
00:28:01.237 [2024-12-06 19:26:46.113805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.113927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.113953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.114069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.114095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.114225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.114266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.237 qpair failed and we were unable to recover it. 00:28:01.237 [2024-12-06 19:26:46.114408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.237 [2024-12-06 19:26:46.114449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 
00:28:01.238 [2024-12-06 19:26:46.114583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.114609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.114703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.114735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.114850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.114876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.114994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.115166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 
00:28:01.238 [2024-12-06 19:26:46.115359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.115525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.115659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.115795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.115938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.115981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 
00:28:01.238 [2024-12-06 19:26:46.116157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.116345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.116511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.116660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.116792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 
00:28:01.238 [2024-12-06 19:26:46.116937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.116963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.117077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.117102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.117243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.117267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.117412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.117437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 00:28:01.238 [2024-12-06 19:26:46.117569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.238 [2024-12-06 19:26:46.117595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.238 qpair failed and we were unable to recover it. 
00:28:01.241 [2024-12-06 19:26:46.133780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.133807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.133946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.133973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.134121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.134246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.134386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 
00:28:01.241 [2024-12-06 19:26:46.134546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.134738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.134883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.134909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 
00:28:01.241 [2024-12-06 19:26:46.135374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.135955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.135982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 
00:28:01.241 [2024-12-06 19:26:46.136188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.136342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.136474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.136640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.136798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 
00:28:01.241 [2024-12-06 19:26:46.136913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.136940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.137053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.137093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.137180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.241 [2024-12-06 19:26:46.137204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.241 qpair failed and we were unable to recover it. 00:28:01.241 [2024-12-06 19:26:46.137285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.137310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.137467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.137491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.137583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.137608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.137748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.137790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.137905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.137931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.138055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.138263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.138415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.138586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.138733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.138904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.138934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.139057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.139253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.139368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.139530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.139697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.139869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.139895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.140025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.140222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.140371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.140565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.140752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.140895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.140921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.141655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.141825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.141974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.142147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.142303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.142485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.142682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.142878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.142905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.143001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.143042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.143198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.143236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 
00:28:01.242 [2024-12-06 19:26:46.143366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.143408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.242 [2024-12-06 19:26:46.143566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.242 [2024-12-06 19:26:46.143605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.242 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.143771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.143798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.143949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.143976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.144147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 
00:28:01.243 [2024-12-06 19:26:46.144312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.144458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.144583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.144746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.144921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 
00:28:01.243 [2024-12-06 19:26:46.145042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.145198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.145395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.145526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.145663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 
00:28:01.243 [2024-12-06 19:26:46.145803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.145955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.145982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.146164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.146317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.146457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 
00:28:01.243 [2024-12-06 19:26:46.146583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.146786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.146890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.146916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.147003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.147029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 00:28:01.243 [2024-12-06 19:26:46.147124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.243 [2024-12-06 19:26:46.147163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.243 qpair failed and we were unable to recover it. 
00:28:01.246 [2024-12-06 19:26:46.165024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.165162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.165297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.165452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.165635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 
00:28:01.246 [2024-12-06 19:26:46.165819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.165845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 
00:28:01.246 [2024-12-06 19:26:46.166666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.166883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.166989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.167034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.167191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.246 [2024-12-06 19:26:46.167215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.246 qpair failed and we were unable to recover it. 00:28:01.246 [2024-12-06 19:26:46.167378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.167402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.167573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.167598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.167736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.167761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.167892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.167917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.168117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.168141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.168277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.168301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.168499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.168523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.168670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.168693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.168837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.168876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.168977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.169139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.169313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.169528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.169653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.169829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.169855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.170003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.170028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.170198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.170221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.170444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.170468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.170600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.170624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.170776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.170802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.171023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.171061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.171222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.171246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.171400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.171425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.171608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.171632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.171918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.171943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.172139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.172163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.172348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.172372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.172579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.172602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.172740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.172765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.172857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.172882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.173116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.173143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.173282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.173316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.173458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.173497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.173652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.173676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.173884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.173910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.174089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.174128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 
00:28:01.247 [2024-12-06 19:26:46.174268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.174292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.247 qpair failed and we were unable to recover it. 00:28:01.247 [2024-12-06 19:26:46.174409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.247 [2024-12-06 19:26:46.174442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.174566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.174590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.174705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.174762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.174934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.174958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 
00:28:01.248 [2024-12-06 19:26:46.175143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.175173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.175297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.175335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.175481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.175520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.175704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.175734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.175917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.175941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 
00:28:01.248 [2024-12-06 19:26:46.176114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.176139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.176315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.176343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.176480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.176504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.176761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.176786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.176936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 
00:28:01.248 [2024-12-06 19:26:46.177126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.177165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.177342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.177365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.177575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.177599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.177739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.177764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.178023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 
00:28:01.248 [2024-12-06 19:26:46.178187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.178375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.178555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.178779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 00:28:01.248 [2024-12-06 19:26:46.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.248 [2024-12-06 19:26:46.178970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.248 qpair failed and we were unable to recover it. 
00:28:01.248 [2024-12-06 19:26:46.179083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.248 [2024-12-06 19:26:46.179123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.248 qpair failed and we were unable to recover it.
00:28:01.248 [... identical error pair (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420) repeats continuously from 19:26:46.179249 through 19:26:46.199845, each followed by "qpair failed and we were unable to recover it." ...]
00:28:01.251 [2024-12-06 19:26:46.199996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.251 [2024-12-06 19:26:46.200190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.251 [2024-12-06 19:26:46.200342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.251 [2024-12-06 19:26:46.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.251 [2024-12-06 19:26:46.200649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 
00:28:01.251 [2024-12-06 19:26:46.200902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.200927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.251 [2024-12-06 19:26:46.201094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.251 [2024-12-06 19:26:46.201132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.251 qpair failed and we were unable to recover it. 00:28:01.252 [2024-12-06 19:26:46.201309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.252 [2024-12-06 19:26:46.201333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.252 qpair failed and we were unable to recover it. 00:28:01.252 [2024-12-06 19:26:46.201509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.252 [2024-12-06 19:26:46.201533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.252 qpair failed and we were unable to recover it. 00:28:01.252 [2024-12-06 19:26:46.201662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.252 [2024-12-06 19:26:46.201687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.252 qpair failed and we were unable to recover it. 
00:28:01.252 [2024-12-06 19:26:46.201694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:01.252 [2024-12-06 19:26:46.201831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.252 [2024-12-06 19:26:46.201856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.252 qpair failed and we were unable to recover it.
00:28:01.254 [... identical connect()/qpair-failure message pair repeated for each retry from 19:26:46.202091 through 19:26:46.216870 ...]
00:28:01.254 [2024-12-06 19:26:46.217093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.217127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.217291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.217315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.217445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.217470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.217749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.217917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.217942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 
00:28:01.254 [2024-12-06 19:26:46.218102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.218126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.218280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.218304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.218542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.218566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.218763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.218788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.218919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.218943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 
00:28:01.254 [2024-12-06 19:26:46.219141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.219165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.219318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.219342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.219486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.219524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.219694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.219718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.219862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.219887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 
00:28:01.254 [2024-12-06 19:26:46.220031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.220212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.220357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.220563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.220734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 
00:28:01.254 [2024-12-06 19:26:46.220919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.220945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.221053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.221077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.221270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.221293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.221421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.221446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.221585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.221609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 
00:28:01.254 [2024-12-06 19:26:46.221746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.221786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.222066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.222132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.254 [2024-12-06 19:26:46.222330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.254 [2024-12-06 19:26:46.222367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.254 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.222524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.222562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.222707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.222766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.222971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.222996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.223161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.223186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.223293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.223319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.223507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.223557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.223690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.223715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.223859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.223885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.224109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.224143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.224349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.224384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.224545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.224576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.224734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.224760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.224924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.224950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.225149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.225172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.225373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.225397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.225554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.225579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.225716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.225773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.225940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.225965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.226088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.226126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.226289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.226312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.226511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.226535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.226692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.226738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.226834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.226859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.227029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.227054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.227248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.227465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.227489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.227621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.227645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.227779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.227816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.228058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.228097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.228281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.228314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.228464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.228487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.228648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.228673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 
00:28:01.255 [2024-12-06 19:26:46.228862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.228889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.229079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.229103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.229312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.229336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.229475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.229499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.255 qpair failed and we were unable to recover it. 00:28:01.255 [2024-12-06 19:26:46.229640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.255 [2024-12-06 19:26:46.229664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 
00:28:01.256 [2024-12-06 19:26:46.229810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.229836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.229955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.229981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.230111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.230279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.230477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 
00:28:01.256 [2024-12-06 19:26:46.230633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.230798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.230963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.230989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.231125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.231149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.231350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.231374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 
00:28:01.256 [2024-12-06 19:26:46.231487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.231512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.231653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.231678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.231911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.231938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.232046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.232084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 00:28:01.256 [2024-12-06 19:26:46.232290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.256 [2024-12-06 19:26:46.232325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.256 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.252649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.252680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.252874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.252899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.253053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.253078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.253280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.253384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.253409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.253540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.253564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.253742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.253768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.253985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.254009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.254180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.254203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.254334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.254374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.254583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.254618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.254793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.254817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.254997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.255195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.255351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.255504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.255653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.255823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.255849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.255978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.256020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.256227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.256263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.256411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.256435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.256619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.256643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.256847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.256872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.257000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.257228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.257394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.257549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.257705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.257872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.257897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.258103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.258128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.258265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.258304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.258488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.258512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.258666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.258689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.258814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.258851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.259022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.259062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.259247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.259271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.259456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.259480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.259649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.259788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.259815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.260020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.260055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.260222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.260246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.260378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.260415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.260561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.260600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.260827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.260853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.260986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.261173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.261316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.261463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.261597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 00:28:01.531 [2024-12-06 19:26:46.261751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.531 [2024-12-06 19:26:46.261777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.531 qpair failed and we were unable to recover it. 
00:28:01.531 [2024-12-06 19:26:46.261999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.262133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.262315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.262559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.262745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
00:28:01.532 [2024-12-06 19:26:46.262967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.262991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.263155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.263179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.263372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.263396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.263598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.263622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.263762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.263786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
00:28:01.532 [2024-12-06 19:26:46.263901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.263926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.264069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.264206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.264372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.264532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
00:28:01.532 [2024-12-06 19:26:46.264678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.264883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.264923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.265038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.265185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.265328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
00:28:01.532 [2024-12-06 19:26:46.265449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.265640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.265891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.265924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.266068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.266093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 00:28:01.532 [2024-12-06 19:26:46.266311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.266335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
00:28:01.532 [2024-12-06 19:26:46.266431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.532 [2024-12-06 19:26:46.266456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.532 qpair failed and we were unable to recover it. 
[... the same three-message error group (connect() failed, errno = 111 / sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated 114 more times with advancing timestamps, 2024-12-06 19:26:46.266601 through 19:26:46.286773 ...]
00:28:01.534 [2024-12-06 19:26:46.286918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.286944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.287043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.287211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.287406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.287558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 
00:28:01.534 [2024-12-06 19:26:46.287674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.287874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.287899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.288032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.288191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.288345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 
00:28:01.534 [2024-12-06 19:26:46.288511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.288674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.288867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.288893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.289033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.289204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 
00:28:01.534 [2024-12-06 19:26:46.289379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.289541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.289690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.289874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.289899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.534 qpair failed and we were unable to recover it. 00:28:01.534 [2024-12-06 19:26:46.290018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.534 [2024-12-06 19:26:46.290043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.290192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.290231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.290359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.290383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.290558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.290596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.290742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.290766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.290903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.290929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.291049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.291205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.291367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.291523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.291681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.291847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.291872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.292017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.292184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.292341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.292498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.292688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.292850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.292876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.293445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.293903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.293986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.294131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.294274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.294451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.294597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.294743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.294860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.294887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.295013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.295193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.295342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.295523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.295703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 [2024-12-06 19:26:46.295717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.295772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.535 [2024-12-06 19:26:46.295802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.535 [2024-12-06 19:26:46.295827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.535 [2024-12-06 19:26:46.295849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.535 [2024-12-06 19:26:46.295857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.295899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.296033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.296060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.296168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.296196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.296329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.296367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.296703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.296806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.296834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.296988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.297236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.297442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.297570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.297702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.297864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.297891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.298045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.298072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.298237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.298275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.298306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:01.535 [2024-12-06 19:26:46.298408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.298362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:01.535 [2024-12-06 19:26:46.298435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.298422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:01.535 [2024-12-06 19:26:46.298433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:01.535 [2024-12-06 19:26:46.298586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.298611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.298802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.298845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.298974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.299110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.299240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.299413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.299566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.299728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.299877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.299906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.300022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.300049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.300155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.300181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 
00:28:01.535 [2024-12-06 19:26:46.300311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.300338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.535 qpair failed and we were unable to recover it. 00:28:01.535 [2024-12-06 19:26:46.300488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.535 [2024-12-06 19:26:46.300516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.300638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.300666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.300802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.300838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.300971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.300997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.301133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.301160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.301291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.301318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.301467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.301493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.301635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.301662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.301822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.301855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.302020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.302223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.302373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.302562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.302729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.302870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.302908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.303001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.303169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.303341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.303547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.303738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.303927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.303953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.304083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.304230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.304376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.304507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.304672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.304834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.304861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.305005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.305198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.305387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.305581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.305708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.305922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.305948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.306103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.306266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.306442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.306674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.306829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.306959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.306986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.307075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.307231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.307363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.307500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.307692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.307823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.307850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.308001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.308130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.308288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.308508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.308705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.308933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.308960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.309110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.309293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.309442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.309571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.309708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.309927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.309954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.310345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.310948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.310975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 
00:28:01.536 [2024-12-06 19:26:46.311126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.311152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.311373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.311418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.311549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.311577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.536 [2024-12-06 19:26:46.311755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.536 [2024-12-06 19:26:46.311783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.536 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.311883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.311910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 
00:28:01.537 [2024-12-06 19:26:46.312029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.312207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.312352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.312486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.312603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 
00:28:01.537 [2024-12-06 19:26:46.312747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.312924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.312958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.313138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.313307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.313417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 
00:28:01.537 [2024-12-06 19:26:46.313566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.313712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.313863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.313890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.314006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.314032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 00:28:01.537 [2024-12-06 19:26:46.314131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.537 [2024-12-06 19:26:46.314158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.537 qpair failed and we were unable to recover it. 
00:28:01.537 [2024-12-06 19:26:46.314268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.537 [2024-12-06 19:26:46.314294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.537 qpair failed and we were unable to recover it.
00:28:01.537 [... the preceding three-line connect failure for tqpair=0x11dc5d0 repeats ~70 more times, timestamps 19:26:46.314411 through 19:26:46.325844 ...]
00:28:01.538 [2024-12-06 19:26:46.326084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.538 [2024-12-06 19:26:46.326129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:01.538 qpair failed and we were unable to recover it.
00:28:01.539 [... the same three-line connect failure for tqpair=0x7f592c000b90 repeats ~40 more times, timestamps 19:26:46.326271 through 19:26:46.333212 ...]
00:28:01.539 [2024-12-06 19:26:46.333355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.333380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.333503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.333530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.333651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.333677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.333797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.333824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.333983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.334179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.334320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.334466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.334596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.334755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.334927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.335050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.335077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.335253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.335280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.335431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.335469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.335643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.335687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.335831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.335860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.335983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.336108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.336261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.336388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.336527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.336708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.336860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.336887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.337244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.337862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.337892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.337988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.338138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.338257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.338373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.338519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.338673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.338913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.339492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.339949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.339976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.340099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.340126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 
00:28:01.539 [2024-12-06 19:26:46.340238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.340265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.340376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.340402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.539 [2024-12-06 19:26:46.340562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.539 [2024-12-06 19:26:46.340588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.539 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.340680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.340714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.340830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.340856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.340979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.341126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.341260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.341369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.341540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.341680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.341862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.341890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.342414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.342894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.342920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.343035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.343207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.343391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.343571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.343730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.343908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.343935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.344017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.344044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.344277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.344303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.344421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.344448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.344570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.344596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 00:28:01.540 [2024-12-06 19:26:46.344717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.540 [2024-12-06 19:26:46.344751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.540 qpair failed and we were unable to recover it. 
00:28:01.540 [2024-12-06 19:26:46.344922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.540 [2024-12-06 19:26:46.344949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.540 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 19:26:46.344922 through 19:26:46.361440 ...]
00:28:01.542 [2024-12-06 19:26:46.361562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.361588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.361712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.361744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.361870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.361896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.362293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.362824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.362964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.362991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.363704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.363972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.363998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.364110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.364227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.364399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.364572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.364730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.364876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.364903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.365054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.365204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.365379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.365525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.365671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.365854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.365881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.365998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.366172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.366317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.366457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.366630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.366840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.366968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.366996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.367147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.367173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.367301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.367328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.367451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.367478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.367683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.367709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.367850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.367876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.368021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.368156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.368369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.368514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.368639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.368862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.368890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.369303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.369970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.369999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.370094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.370268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.370424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.370575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.370777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 
00:28:01.542 [2024-12-06 19:26:46.370955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.370981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.371205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.371241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.542 qpair failed and we were unable to recover it. 00:28:01.542 [2024-12-06 19:26:46.371362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.542 [2024-12-06 19:26:46.371388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.371511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.371538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.371685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.371711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 
00:28:01.543 [2024-12-06 19:26:46.371846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.371873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.371994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.372143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.372284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.372457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 
00:28:01.543 [2024-12-06 19:26:46.372580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.372728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.372853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.372879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.373003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 00:28:01.543 [2024-12-06 19:26:46.373231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.373261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 
00:28:01.543 [2024-12-06 19:26:46.373406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.543 [2024-12-06 19:26:46.373432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.543 qpair failed and we were unable to recover it. 
[... identical error pair — connect() failed, errno = 111 (ECONNREFUSED), followed by the unrecoverable qpair failure for tqpair=0x11dc5d0 (addr=10.0.0.2, port=4420) — repeats continuously from 19:26:46.373630 through 19:26:46.390660 ...]
00:28:01.544 [2024-12-06 19:26:46.390754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.390781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.390881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.390907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 
00:28:01.544 [2024-12-06 19:26:46.391401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.544 [2024-12-06 19:26:46.391848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.544 qpair failed and we were unable to recover it. 00:28:01.544 [2024-12-06 19:26:46.391930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.391956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.392040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.392190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.392337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.392458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.392608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.392724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.392875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.392901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.393444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.393891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.393918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.394152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.394881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.394907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.394993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.395520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.395964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.395993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.396260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.396867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.396893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.396979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.397536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.397937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.397964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.398083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.398223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.398382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.398497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.398607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.398925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.398952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.399092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.399240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.399361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.399483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.399560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ea570 (9): Bad file descriptor 00:28:01.545 [2024-12-06 19:26:46.399750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.399926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.399956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.400486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.400947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.400974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 00:28:01.545 [2024-12-06 19:26:46.401096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.545 [2024-12-06 19:26:46.401122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.545 qpair failed and we were unable to recover it. 
00:28:01.545 [2024-12-06 19:26:46.401216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.545 [2024-12-06 19:26:46.401244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:01.545 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 19:26:46.401389 through 19:26:46.417698, alternating between tqpair=0x7f592c000b90 and tqpair=0x11dc5d0, all against addr=10.0.0.2, port=4420 ...]
00:28:01.547 [2024-12-06 19:26:46.417797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.417824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.417909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.417935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.418444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.418881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.418907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.419140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.419873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.419899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.419990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.420548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.420967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.420994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.421112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.421229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.421350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.421498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.421630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.421808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 
00:28:01.547 [2024-12-06 19:26:46.421928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.421954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.422073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.422099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.422213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.547 [2024-12-06 19:26:46.422240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.547 qpair failed and we were unable to recover it. 00:28:01.547 [2024-12-06 19:26:46.422356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.422382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.422501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.422528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.422676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.422703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.422834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.422865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.422947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.422973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.423385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.423942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.423969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.424060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.424206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.424351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.424500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.424634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.424823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.424853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.425003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.425176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.425352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.425527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.425697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.425851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.425878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.426430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.426952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.426986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.427133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.427159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.427307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.427334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.427480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.427507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.427670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.427710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.427850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.427891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.428013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.428137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.428256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.428426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.428589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.428776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.428960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.428988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.429133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.429261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.429406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.429530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.429730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.429925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.429952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.430039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.430217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.430366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.430550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.430690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.430899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.430928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.431334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.431920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.431948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.432042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.432189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.432336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.432489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.432634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 
00:28:01.548 [2024-12-06 19:26:46.432782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.548 [2024-12-06 19:26:46.432928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.548 [2024-12-06 19:26:46.432954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.548 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.433097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.433241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.433401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.433515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.433662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.433826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.433867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.434320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.434879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.435055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.435197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.435349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.435499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.435619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.435809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.435849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.435981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.436536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.436899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.436995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.437135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.437283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.437404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.437555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.437681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.437854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.437882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.437997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.438669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.438931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.438958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.439356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.439952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.439979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.440065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.440208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.440349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.440462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.440605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.440763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.440894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.440921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.441449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.441836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.441985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.442158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.442306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.442479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.442595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.442742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 00:28:01.549 [2024-12-06 19:26:46.442922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.549 [2024-12-06 19:26:46.442949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.549 qpair failed and we were unable to recover it. 
00:28:01.549 [2024-12-06 19:26:46.443095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.549 [2024-12-06 19:26:46.443121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.549 qpair failed and we were unable to recover it.
00:28:01.549 [2024-12-06 19:26:46.443247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.549 [2024-12-06 19:26:46.443273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.549 qpair failed and we were unable to recover it.
00:28:01.549 [2024-12-06 19:26:46.443400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.549 [2024-12-06 19:26:46.443426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.549 qpair failed and we were unable to recover it.
00:28:01.549 [2024-12-06 19:26:46.443562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.549 [2024-12-06 19:26:46.443591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.549 qpair failed and we were unable to recover it.
00:28:01.549 [2024-12-06 19:26:46.443734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.549 [2024-12-06 19:26:46.443775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.443911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.443951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.444915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.444943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.445922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.445950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.446885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.446926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f592c000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.447900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.447927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.448910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.448937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.449118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.449271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.449445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.449599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.449808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.449971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.450153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.450311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.450474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.450699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.450860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.450887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.451885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.451912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.452950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.452976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.453890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.453916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.454952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.454979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.455093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.455120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.455277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.455304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.455478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.455505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.455645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.455671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.550 [2024-12-06 19:26:46.455812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.550 [2024-12-06 19:26:46.455839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.550 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.455926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.455953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.456882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.456909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.457882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.457908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.458864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.458990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.459848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.459874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.460891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.460918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.461000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.461026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.461170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.461197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.461335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.461361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.461445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.551 [2024-12-06 19:26:46.461471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.551 qpair failed and we were unable to recover it.
00:28:01.551 [2024-12-06 19:26:46.461575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.461602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.461756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.461783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.461902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.461928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.462084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.462229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.462384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.462534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.462732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.462879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.462905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.463054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.463198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.463391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.463547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.463703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.463868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.463895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.463986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.464140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.464309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.464455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.464618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.464814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.464932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.464959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.465091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.465215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.465395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.465552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.465716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.465904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.465942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.466041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.466067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.551 [2024-12-06 19:26:46.466151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.466188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 
00:28:01.551 [2024-12-06 19:26:46.466301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.551 [2024-12-06 19:26:46.466336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.551 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.466455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.466481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.466598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.466624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.466715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.466746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.466898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.466929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.467064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.467237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.467399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.467551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.467744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.467899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.467926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.468651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.468914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.468941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.469163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.469204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.469375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.469403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.469548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.469574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.469728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.469756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.469842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.469868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.470042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.470206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.470380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.470549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.470695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.470883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.470911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.471109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.471135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.471255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.471282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.471406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.471445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.471637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.471663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.471824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.471851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.471974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.472094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.472269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.472415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.472561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.472737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.472956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.472983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.473133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.473280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.473407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.473584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.473714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.473897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.473924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.474049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.474075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.474166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.474193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 00:28:01.552 [2024-12-06 19:26:46.474311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.552 [2024-12-06 19:26:46.474337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.552 qpair failed and we were unable to recover it. 
00:28:01.552 [2024-12-06 19:26:46.477773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.552 [2024-12-06 19:26:46.477813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420
00:28:01.552 qpair failed and we were unable to recover it.
00:28:01.554 [2024-12-06 19:26:46.489344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.489462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.489488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.489573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.489599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.489737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.489778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.489920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.489961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.490085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.490241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.490471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.490620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.490740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.490890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.490917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.491616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.491962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.491990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.492138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.492284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.492430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.492578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.492755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.492907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.492933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.493041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.493165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.493286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.493434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.493596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.493735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.493898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.493925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.494668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.494957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.494984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.495136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.495164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.495296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.495323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.495482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.495510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.495630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.495656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.495793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.495820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.495970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.496137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.496307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.496479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.496656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.496868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.496909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.497011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.497191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.497370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.497519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.497690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.497850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.497877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.497993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.498163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.498312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.498476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.498659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.498911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.498953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.499156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.499185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.499319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.499358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.499516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.499548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 00:28:01.554 [2024-12-06 19:26:46.499654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.499694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.554 qpair failed and we were unable to recover it. 
00:28:01.554 [2024-12-06 19:26:46.499877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.554 [2024-12-06 19:26:46.499905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.500033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.500214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.500399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.500522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.500686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.500849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.500878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.501564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.501913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.501996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.502120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.502285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.502438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.502578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.502763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.502926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.502952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.503079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.503107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.503270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.503296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.503420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.503448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.503542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.503569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.503717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.503753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.508756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.508783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.508918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.555 [2024-12-06 19:26:46.508946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:01.555 [2024-12-06 19:26:46.509103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.509131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.509306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.555 [2024-12-06 19:26:46.509333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.509481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.555 [2024-12-06 19:26:46.509509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.555 [2024-12-06 19:26:46.509658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.509685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.509822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.509849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.510035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.510063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 
00:28:01.555 [2024-12-06 19:26:46.510164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.510192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.510339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.555 [2024-12-06 19:26:46.510366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.555 qpair failed and we were unable to recover it. 00:28:01.555 [2024-12-06 19:26:46.510567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.510594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.510705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.510740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.510882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.510909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.511013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.511173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.511403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.511594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.511763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.511935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.511961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.512673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.512853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.512985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.513210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.513362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.513561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.513727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.513864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.513891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.514044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.514209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.514366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.514549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.514733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.514871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.514910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.515188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.515363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.515512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.515650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.515814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.515970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.515996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.516126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.516297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.516464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.516655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.516800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.516966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.516994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.517138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.517310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.517476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.517636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.517790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.517916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.517944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.518088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.518234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.518393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.518563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.518706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.518892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.518920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.519071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.519201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.519356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.519527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.519705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.519901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.519930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 
00:28:01.556 [2024-12-06 19:26:46.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.520099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.556 qpair failed and we were unable to recover it. 00:28:01.556 [2024-12-06 19:26:46.520257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.556 [2024-12-06 19:26:46.520285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.520446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.520474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.520609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.520636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.520766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.520803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.520900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.520928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.521080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.521247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.521405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.521713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.521886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.521914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.522460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.522905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.522932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.523079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.523276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.523401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.523572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.523759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.523883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.523910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.524005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.524183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.524366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.524487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.524631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.524777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.524897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.524924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.525413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.525901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.525929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.526051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.526199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.526379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.526520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.526677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.526812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.526927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.526954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.527710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.527883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.527983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.528201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.528353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.528534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.528678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.528816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.528944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.528971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.529058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.529224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.529346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.529506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.529625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.529768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.557 [2024-12-06 19:26:46.529891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.529918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.530091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.530119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.530230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.530257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.530391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.530418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 00:28:01.557 [2024-12-06 19:26:46.530612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.557 [2024-12-06 19:26:46.530639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.557 qpair failed and we were unable to recover it. 
00:28:01.558 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:01.558 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:01.558 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.558 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.558 [2024-12-06 19:26:46.536095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.536253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.536385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.536585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.536759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.536899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.536926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.537076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.537226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.537453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.537620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.537792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.537917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.537944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.538107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.538233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.538415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.538590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.538789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.538941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.538968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.539142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.539313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.539491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.539678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.539825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.539951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.539979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.540079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.540248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.540420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.540582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.540753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 00:28:01.558 [2024-12-06 19:26:46.540895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.540922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.558 qpair failed and we were unable to recover it. 
00:28:01.558 [2024-12-06 19:26:46.541047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.558 [2024-12-06 19:26:46.541074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.541237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.541263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.541373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.541400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.541548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.541575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.541711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.541748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.541844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.541871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.542695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.542873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.542975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.543119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.543286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.543446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.543623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.543779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.543936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.543963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.544146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.544304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.544426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.544578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.544725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.544858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.544889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.545020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.545224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.545386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.545552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.545742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.545867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.545894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.546038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.546161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.546362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.546518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.546767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.546931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.546958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.547134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.547296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.547471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.547643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.547827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.547957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.548115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.548299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.548467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.548589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.548739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.548918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.548946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.549086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.549113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.549479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.559 [2024-12-06 19:26:46.549532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420
00:28:01.559 qpair failed and we were unable to recover it.
00:28:01.559 [2024-12-06 19:26:46.551941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.551969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 
00:28:01.559 [2024-12-06 19:26:46.552715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.559 qpair failed and we were unable to recover it. 00:28:01.559 [2024-12-06 19:26:46.552967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.559 [2024-12-06 19:26:46.552994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.553118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.553289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.553409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.553561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.553765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.553931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.553958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.554096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.554280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.554438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.554660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.554792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.554934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.554977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.555120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.555147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.555361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.555388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.555556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.555583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.555718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.555751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.555886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.555913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.556007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.556142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.556298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.556462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.556650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.556839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.556956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.556983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.557133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.557160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.557339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.557366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5930000b90 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.557548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.557576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.557755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.557782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.557917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.557944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.558149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.558347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.558499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.558633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.558808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.558962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.558992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.559135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.559282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.559404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.559599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.559823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.559960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.559986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.560174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.560351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.560512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.560633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.560792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.560970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.560997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.561125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.561293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.561472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.561663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.561807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.561923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.561949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.562142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.562294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.562448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.562643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.562764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.562907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.562933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.563071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.563194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.563355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.563514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.563711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.563868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.563895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.564021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.564047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.560 [2024-12-06 19:26:46.564170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.564196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 
00:28:01.560 [2024-12-06 19:26:46.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.560 [2024-12-06 19:26:46.564355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.560 qpair failed and we were unable to recover it. 00:28:01.561 [2024-12-06 19:26:46.564525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.561 [2024-12-06 19:26:46.564552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.561 qpair failed and we were unable to recover it. 00:28:01.561 [2024-12-06 19:26:46.564704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.561 [2024-12-06 19:26:46.564740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.561 qpair failed and we were unable to recover it. 00:28:01.561 [2024-12-06 19:26:46.564888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.561 [2024-12-06 19:26:46.564914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.561 qpair failed and we were unable to recover it. 00:28:01.561 [2024-12-06 19:26:46.565050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.561 [2024-12-06 19:26:46.565076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.561 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.565232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.565263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.565417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.565443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.565567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.565594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.565768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.565804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.565936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.565973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.566118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.566145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.566281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.566308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.566514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.566541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.566674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.566701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.566849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.566876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.567012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.567223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.567438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.567593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.567772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.567928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.567954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.568096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.568219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.568374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.568535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.568741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.568936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.568963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.569133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.569293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.569448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 
00:28:01.823 [2024-12-06 19:26:46.569625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.569766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.569884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.569910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.570060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.570086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.823 qpair failed and we were unable to recover it. 00:28:01.823 [2024-12-06 19:26:46.570196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.823 [2024-12-06 19:26:46.570223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.570365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.570523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.570549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.570675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.570702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.570870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.570910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.571045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.571209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.571430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.571551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.571754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.571890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.571917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.572104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.572131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.572261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.572287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.572458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.572485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.572616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.572642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.572832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.572859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.572998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.573160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.573330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.573523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.573737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.573897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.573924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.574050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.574076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.574246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.574272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.574404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.574430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 Malloc0 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.574575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.574604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.574752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.574780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.824 [2024-12-06 19:26:46.574974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.575104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.824 [2024-12-06 19:26:46.575310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.575510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.575682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.575908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.575934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.576099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.576126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.576260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.576286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-12-06 19:26:46.576413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.576439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-12-06 19:26:46.576541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-12-06 19:26:46.576575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.576663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.576696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.576846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.576873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.577035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-12-06 19:26:46.577273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.577421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.577581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.577736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.577940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.577967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-12-06 19:26:46.578101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.578145] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.825 [2024-12-06 19:26:46.578246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.578395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.578542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.578710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-12-06 19:26:46.578901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.578928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.579103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.579129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.579291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.579317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.579483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.579510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.579671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.579698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-12-06 19:26:46.579868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.579916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.580056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.580090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.580251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.580278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.580470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.580497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.580656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.580684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-12-06 19:26:46.580789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.580815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5938000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.580983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.581022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.581209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.581236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.581345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.581375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-12-06 19:26:46.581513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-12-06 19:26:46.581540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.826 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.826 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:01.826 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.826 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.827 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.827 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:01.827 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.827 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.828 [2024-12-06 19:26:46.599459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.599486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.599684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.599710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.599865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.599891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.600054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.600202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 
00:28:01.828 [2024-12-06 19:26:46.600389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.600571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.600729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.600851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.600877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.601040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.601067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 
00:28:01.828 [2024-12-06 19:26:46.601203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.601231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.601389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.601415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-12-06 19:26:46.601582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-12-06 19:26:46.601608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.601783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.601813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.601994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.602143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.602306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.602423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.602584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.602731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.829 [2024-12-06 19:26:46.602899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.829 [2024-12-06 19:26:46.603073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.603188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.603315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.603499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.603685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.603846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.603872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.604005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.604175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.604351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.604562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.604712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.604881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.604907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.605073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.605245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.605450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.605635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.605789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.605956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.605983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.606175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-12-06 19:26:46.606204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc5d0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-12-06 19:26:46.606717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.829 [2024-12-06 19:26:46.609013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.829 [2024-12-06 19:26:46.609150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.829 [2024-12-06 19:26:46.609179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.829 [2024-12-06 19:26:46.609196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.829 [2024-12-06 19:26:46.609210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.829 [2024-12-06 19:26:46.609247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.829 19:26:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 330068 00:28:01.829 [2024-12-06 19:26:46.618850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.829 [2024-12-06 19:26:46.618948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.829 [2024-12-06 19:26:46.618974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.829 [2024-12-06 19:26:46.618989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.829 [2024-12-06 19:26:46.619001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.829 [2024-12-06 19:26:46.619032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-12-06 19:26:46.628918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.829 [2024-12-06 19:26:46.629062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.629087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.629102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.629114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.629143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.638882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.638982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.639021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.639042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.639056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.639086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.648818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.648916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.648947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.648962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.648975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.649019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.658877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.658975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.659001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.659015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.659028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.659072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.668903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.668998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.669023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.669038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.669050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.669094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.678963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.679076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.679100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.679114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.679127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.679157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.688934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.689036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.689060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.689074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.689087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.689116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.698987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.699106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.699131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.699146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.699158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.699187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.709002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.709103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.709127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.709141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.709154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.709183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.719023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.719157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.719181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.719196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.719208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.719236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.729007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.729114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.729139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.729153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.729166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.729194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
00:28:01.830 [2024-12-06 19:26:46.739024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.830 [2024-12-06 19:26:46.739130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.830 [2024-12-06 19:26:46.739153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.830 [2024-12-06 19:26:46.739167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.830 [2024-12-06 19:26:46.739180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:01.830 [2024-12-06 19:26:46.739208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.830 qpair failed and we were unable to recover it. 
[... the same seven-entry CONNECT failure sequence (ctrlr.c Unknown controller ID 0x1 -> nvme_fabric.c rc -5, sct 1, sc 130 -> nvme_tcp.c poll/connect failures for tqpair=0x11dc5d0 -> nvme_qpair.c CQ transport error -6 on qpair id 3 -> "qpair failed and we were unable to recover it.") repeats at ~10 ms intervals, 34 more times, from [2024-12-06 19:26:46.749097] through [2024-12-06 19:26:47.080268] ...]
00:28:02.092 [2024-12-06 19:26:47.090078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.092 [2024-12-06 19:26:47.090181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.092 [2024-12-06 19:26:47.090206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.092 [2024-12-06 19:26:47.090221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.092 [2024-12-06 19:26:47.090234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.092 [2024-12-06 19:26:47.090262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.092 qpair failed and we were unable to recover it. 
00:28:02.092 [2024-12-06 19:26:47.100102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.092 [2024-12-06 19:26:47.100206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.092 [2024-12-06 19:26:47.100231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.092 [2024-12-06 19:26:47.100245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.092 [2024-12-06 19:26:47.100257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.092 [2024-12-06 19:26:47.100287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.092 qpair failed and we were unable to recover it. 
00:28:02.092 [2024-12-06 19:26:47.110120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.092 [2024-12-06 19:26:47.110206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.092 [2024-12-06 19:26:47.110230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.092 [2024-12-06 19:26:47.110245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.092 [2024-12-06 19:26:47.110257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.092 [2024-12-06 19:26:47.110286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.092 qpair failed and we were unable to recover it. 
00:28:02.092 [2024-12-06 19:26:47.120150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.092 [2024-12-06 19:26:47.120246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.092 [2024-12-06 19:26:47.120271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.092 [2024-12-06 19:26:47.120285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.092 [2024-12-06 19:26:47.120297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.092 [2024-12-06 19:26:47.120325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.092 qpair failed and we were unable to recover it. 
00:28:02.092 [2024-12-06 19:26:47.130128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.092 [2024-12-06 19:26:47.130213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.092 [2024-12-06 19:26:47.130237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.092 [2024-12-06 19:26:47.130251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.092 [2024-12-06 19:26:47.130263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.092 [2024-12-06 19:26:47.130293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.092 qpair failed and we were unable to recover it. 
00:28:02.350 [2024-12-06 19:26:47.140201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.350 [2024-12-06 19:26:47.140295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.350 [2024-12-06 19:26:47.140319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.350 [2024-12-06 19:26:47.140334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.350 [2024-12-06 19:26:47.140347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.350 [2024-12-06 19:26:47.140375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.350 qpair failed and we were unable to recover it. 
00:28:02.350 [2024-12-06 19:26:47.150203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.350 [2024-12-06 19:26:47.150321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.350 [2024-12-06 19:26:47.150346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.350 [2024-12-06 19:26:47.150361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.350 [2024-12-06 19:26:47.150374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.150402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.160223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.160317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.160341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.160361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.160375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.160409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.170355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.170445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.170470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.170484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.170496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.170524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.180334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.180429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.180453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.180467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.180480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.180509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.190344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.190431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.190455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.190469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.190481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.190509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.200359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.200453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.200477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.200491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.200504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.200541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.210351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.210445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.210470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.210485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.210497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.210526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.220413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.220530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.220556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.220571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.220583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.220612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.230500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.230587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.230611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.230626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.230638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.230667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.240472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.240581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.240604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.240619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.240631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.240660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.250491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.250588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.250614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.250628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.250642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.250670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.260545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.260634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.260658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.260672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.260685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.260738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.270537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.270619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.270643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.270658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.270670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.351 [2024-12-06 19:26:47.270714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.351 qpair failed and we were unable to recover it. 
00:28:02.351 [2024-12-06 19:26:47.280577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.351 [2024-12-06 19:26:47.280667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.351 [2024-12-06 19:26:47.280690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.351 [2024-12-06 19:26:47.280726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.351 [2024-12-06 19:26:47.280742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.280772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.290759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.290857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.290881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.290902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.290916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.290946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.300652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.300758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.300783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.300799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.300811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.300840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.310879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.311020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.311063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.311078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.311091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.311121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.320770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.320903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.320929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.320945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.320957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.320987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.330817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.330904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.330928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.330943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.330956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.330991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.340809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.340898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.340923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.340938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.340951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.340981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.350769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.350860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.350884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.350899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.350912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.350942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.360841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.360943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.360970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.360985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.360998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.361042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.370884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.371008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.371032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.371047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.371074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.371104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.380991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.381097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.381122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.381136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.381148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.381177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.352 [2024-12-06 19:26:47.390959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.352 [2024-12-06 19:26:47.391056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.352 [2024-12-06 19:26:47.391080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.352 [2024-12-06 19:26:47.391095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.352 [2024-12-06 19:26:47.391107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.352 [2024-12-06 19:26:47.391136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.352 qpair failed and we were unable to recover it. 
00:28:02.611 [2024-12-06 19:26:47.400996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.611 [2024-12-06 19:26:47.401128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.611 [2024-12-06 19:26:47.401152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.611 [2024-12-06 19:26:47.401167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.611 [2024-12-06 19:26:47.401180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.611 [2024-12-06 19:26:47.401209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.611 qpair failed and we were unable to recover it. 
00:28:02.611 [2024-12-06 19:26:47.411021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.611 [2024-12-06 19:26:47.411111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.611 [2024-12-06 19:26:47.411134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.611 [2024-12-06 19:26:47.411149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.611 [2024-12-06 19:26:47.411161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.611 [2024-12-06 19:26:47.411190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.611 qpair failed and we were unable to recover it. 
00:28:02.611 [2024-12-06 19:26:47.421097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.611 [2024-12-06 19:26:47.421190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.611 [2024-12-06 19:26:47.421215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.611 [2024-12-06 19:26:47.421234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.611 [2024-12-06 19:26:47.421248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.611 [2024-12-06 19:26:47.421277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.611 qpair failed and we were unable to recover it. 
00:28:02.611 [2024-12-06 19:26:47.431070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.611 [2024-12-06 19:26:47.431151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.611 [2024-12-06 19:26:47.431175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.611 [2024-12-06 19:26:47.431189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.431202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.431231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.441124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.441243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.441266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.441280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.441292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.441333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.451100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.451206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.451231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.451246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.451258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.451287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.461185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.461313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.461339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.461354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.461366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.461400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.471151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.471244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.471268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.471282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.471294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.471323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.481270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.481397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.481422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.481438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.481450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.481479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.491261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.491348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.491372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.491386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.491399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.491427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.501223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.501317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.501340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.501354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.501366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.501395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.511293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.511398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.511423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.511438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.511450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.511479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.521313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.521404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.521428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.521442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.521455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.521483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.531335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.531456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.531481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.531497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.531509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.531538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.541348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.541430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.541453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.541468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.541480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.541509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.551384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.551465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.551489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.551508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.551537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.612 [2024-12-06 19:26:47.551567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.612 qpair failed and we were unable to recover it. 
00:28:02.612 [2024-12-06 19:26:47.561467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.612 [2024-12-06 19:26:47.561557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.612 [2024-12-06 19:26:47.561581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.612 [2024-12-06 19:26:47.561596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.612 [2024-12-06 19:26:47.561608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.561637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.571473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.571557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.571583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.571597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.571609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.571637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.581463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.581547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.581571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.581586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.581598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.581627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.591510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.591592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.591616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.591630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.591643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.591677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.601542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.601639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.601663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.601678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.601691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.601743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.611577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.611656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.611681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.611696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.611732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.611765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.621590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.621690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.621740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.621757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.621770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.621800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.631621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.631739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.631766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.631781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.631794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.631823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.641682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.641805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.641830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.641845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.641858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.641887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.613 [2024-12-06 19:26:47.651695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.613 [2024-12-06 19:26:47.651808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.613 [2024-12-06 19:26:47.651832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.613 [2024-12-06 19:26:47.651848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.613 [2024-12-06 19:26:47.651860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.613 [2024-12-06 19:26:47.651890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.613 qpair failed and we were unable to recover it. 
00:28:02.872 [2024-12-06 19:26:47.661758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:02.872 [2024-12-06 19:26:47.661853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:02.872 [2024-12-06 19:26:47.661879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:02.872 [2024-12-06 19:26:47.661893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:02.872 [2024-12-06 19:26:47.661906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:02.872 [2024-12-06 19:26:47.661935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:02.872 qpair failed and we were unable to recover it. 
00:28:02.872 [2024-12-06 19:26:47.671757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.872 [2024-12-06 19:26:47.671845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.872 [2024-12-06 19:26:47.671869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.872 [2024-12-06 19:26:47.671884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.872 [2024-12-06 19:26:47.671896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.872 [2024-12-06 19:26:47.671926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.872 qpair failed and we were unable to recover it.
00:28:02.872 [2024-12-06 19:26:47.681794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.872 [2024-12-06 19:26:47.681923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.872 [2024-12-06 19:26:47.681949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.872 [2024-12-06 19:26:47.681969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.872 [2024-12-06 19:26:47.681982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.872 [2024-12-06 19:26:47.682032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.872 qpair failed and we were unable to recover it.
00:28:02.872 [2024-12-06 19:26:47.691818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.691910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.691935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.691950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.691963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.691992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.701843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.701932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.701956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.701971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.701984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.702012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.711851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.711945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.711969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.711984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.712011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.712039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.721870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.722002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.722028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.722044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.722057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.722107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.731878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.731967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.731991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.732032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.732045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.732074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.741986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.742092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.742127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.742141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.742153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.742182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.751948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.752049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.752072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.752087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.752099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.752127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.762041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.762164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.762190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.762204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.762216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.762255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.772091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.772190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.772214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.772229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.772242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.772280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.782064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.782179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.782204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.782219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.782232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.782260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.792111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.792194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.792220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.792234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.792247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.792275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.802164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.873 [2024-12-06 19:26:47.802264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.873 [2024-12-06 19:26:47.802287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.873 [2024-12-06 19:26:47.802302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.873 [2024-12-06 19:26:47.802314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.873 [2024-12-06 19:26:47.802357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.873 qpair failed and we were unable to recover it.
00:28:02.873 [2024-12-06 19:26:47.812193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.812276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.812300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.812320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.812332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.812361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.822210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.822299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.822322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.822337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.822349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.822378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.832218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.832309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.832332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.832347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.832360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.832388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.842277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.842369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.842394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.842409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.842422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.842450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.852334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.852436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.852461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.852476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.852488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.852532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.862253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.862341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.862365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.862379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.862391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.862419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.872306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.872390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.872413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.872427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.872439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.872468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.882378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.882507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.882532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.882547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.882559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.882598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.892391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.892491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.892515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.892529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.892541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.892570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.902461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.902558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.902581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.902596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.902608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.902637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-12-06 19:26:47.912479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:02.874 [2024-12-06 19:26:47.912569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:02.874 [2024-12-06 19:26:47.912593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:02.874 [2024-12-06 19:26:47.912608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:02.874 [2024-12-06 19:26:47.912620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:02.874 [2024-12-06 19:26:47.912656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:03.133 [2024-12-06 19:26:47.922475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.133 [2024-12-06 19:26:47.922602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.133 [2024-12-06 19:26:47.922628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.133 [2024-12-06 19:26:47.922643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.133 [2024-12-06 19:26:47.922655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.133 [2024-12-06 19:26:47.922684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.133 qpair failed and we were unable to recover it.
00:28:03.133 [2024-12-06 19:26:47.932545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.133 [2024-12-06 19:26:47.932650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.133 [2024-12-06 19:26:47.932675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.133 [2024-12-06 19:26:47.932690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.133 [2024-12-06 19:26:47.932718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.133 [2024-12-06 19:26:47.932768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.133 qpair failed and we were unable to recover it.
00:28:03.133 [2024-12-06 19:26:47.942529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.133 [2024-12-06 19:26:47.942615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.133 [2024-12-06 19:26:47.942639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.133 [2024-12-06 19:26:47.942660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.133 [2024-12-06 19:26:47.942674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.942717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:47.952532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:47.952625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:47.952649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:47.952663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:47.952675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.952719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:47.962589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:47.962691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:47.962761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:47.962779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:47.962792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.962822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:47.972544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:47.972633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:47.972657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:47.972671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:47.972684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.972734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:47.982594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:47.982738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:47.982763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:47.982778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:47.982791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.982835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:47.992635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:47.992748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:47.992773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:47.992788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:47.992800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:47.992829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:48.002661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:48.002826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:48.002852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:48.002868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:48.002881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:48.002920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:48.012717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.134 [2024-12-06 19:26:48.012815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.134 [2024-12-06 19:26:48.012842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.134 [2024-12-06 19:26:48.012857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.134 [2024-12-06 19:26:48.012869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.134 [2024-12-06 19:26:48.012899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.134 qpair failed and we were unable to recover it.
00:28:03.134 [2024-12-06 19:26:48.022674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.022778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.022805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.022820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.134 [2024-12-06 19:26:48.022833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.134 [2024-12-06 19:26:48.022863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.134 qpair failed and we were unable to recover it. 
00:28:03.134 [2024-12-06 19:26:48.032771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.032867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.032892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.032907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.134 [2024-12-06 19:26:48.032920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.134 [2024-12-06 19:26:48.032950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.134 qpair failed and we were unable to recover it. 
00:28:03.134 [2024-12-06 19:26:48.042870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.043013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.043040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.043070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.134 [2024-12-06 19:26:48.043092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.134 [2024-12-06 19:26:48.043120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.134 qpair failed and we were unable to recover it. 
00:28:03.134 [2024-12-06 19:26:48.052819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.052940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.052964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.052979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.134 [2024-12-06 19:26:48.052992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.134 [2024-12-06 19:26:48.053021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.134 qpair failed and we were unable to recover it. 
00:28:03.134 [2024-12-06 19:26:48.062821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.062906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.062933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.062949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.134 [2024-12-06 19:26:48.062961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.134 [2024-12-06 19:26:48.062990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.134 qpair failed and we were unable to recover it. 
00:28:03.134 [2024-12-06 19:26:48.072875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.134 [2024-12-06 19:26:48.072963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.134 [2024-12-06 19:26:48.072987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.134 [2024-12-06 19:26:48.073022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.073036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.073065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.082942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.083066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.083091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.083106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.083118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.083147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.093047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.093142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.093177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.093191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.093204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.093232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.102982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.103090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.103113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.103127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.103139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.103168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.113027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.113117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.113141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.113155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.113167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.113202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.123079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.123186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.123211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.123225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.123238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.123267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.133093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.133177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.133202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.133216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.133228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.133257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.143123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.143211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.143235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.143249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.143261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.143290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.153119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.153241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.153267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.153282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.153294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.153322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.163158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.163258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.163284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.163298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.163311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.163340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.135 [2024-12-06 19:26:48.173165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.135 [2024-12-06 19:26:48.173244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.135 [2024-12-06 19:26:48.173268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.135 [2024-12-06 19:26:48.173282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.135 [2024-12-06 19:26:48.173295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.135 [2024-12-06 19:26:48.173323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.135 qpair failed and we were unable to recover it. 
00:28:03.394 [2024-12-06 19:26:48.183159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.394 [2024-12-06 19:26:48.183243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.394 [2024-12-06 19:26:48.183267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.394 [2024-12-06 19:26:48.183281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.394 [2024-12-06 19:26:48.183294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.394 [2024-12-06 19:26:48.183322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.394 qpair failed and we were unable to recover it. 
00:28:03.394 [2024-12-06 19:26:48.193202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.394 [2024-12-06 19:26:48.193318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.394 [2024-12-06 19:26:48.193343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.394 [2024-12-06 19:26:48.193358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.394 [2024-12-06 19:26:48.193371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.394 [2024-12-06 19:26:48.193398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.394 qpair failed and we were unable to recover it. 
00:28:03.394 [2024-12-06 19:26:48.203271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.394 [2024-12-06 19:26:48.203393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.394 [2024-12-06 19:26:48.203422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.394 [2024-12-06 19:26:48.203438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.394 [2024-12-06 19:26:48.203450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.394 [2024-12-06 19:26:48.203479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.394 qpair failed and we were unable to recover it. 
00:28:03.394 [2024-12-06 19:26:48.213236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.394 [2024-12-06 19:26:48.213322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.394 [2024-12-06 19:26:48.213354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.394 [2024-12-06 19:26:48.213368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.213380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.213408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.223320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.223405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.223428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.223442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.223454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.223483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.233333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.233425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.233448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.233463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.233475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.233504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.243393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.243485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.243508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.243523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.243535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.243568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.253399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.253495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.253519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.253534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.253546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.253574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.263375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.263460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.263484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.263499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.263511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.263538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.273450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.273530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.273554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.273570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.273582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.273611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.283448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.395 [2024-12-06 19:26:48.283554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.395 [2024-12-06 19:26:48.283579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.395 [2024-12-06 19:26:48.283594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.395 [2024-12-06 19:26:48.283606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.395 [2024-12-06 19:26:48.283634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.395 qpair failed and we were unable to recover it. 
00:28:03.395 [2024-12-06 19:26:48.293500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.293594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.293618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.293632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.293644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.293672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.395 qpair failed and we were unable to recover it.
00:28:03.395 [2024-12-06 19:26:48.303553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.303640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.303665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.303679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.303691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.303729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.395 qpair failed and we were unable to recover it.
00:28:03.395 [2024-12-06 19:26:48.313525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.313607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.313631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.313646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.313658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.313688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.395 qpair failed and we were unable to recover it.
00:28:03.395 [2024-12-06 19:26:48.323537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.323630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.323655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.323670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.323682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.323710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.395 qpair failed and we were unable to recover it.
00:28:03.395 [2024-12-06 19:26:48.333565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.333650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.333679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.333694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.333730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.333773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.395 qpair failed and we were unable to recover it.
00:28:03.395 [2024-12-06 19:26:48.343585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.395 [2024-12-06 19:26:48.343733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.395 [2024-12-06 19:26:48.343767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.395 [2024-12-06 19:26:48.343783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.395 [2024-12-06 19:26:48.343796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.395 [2024-12-06 19:26:48.343826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.353600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.353682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.353732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.353750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.353763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.353793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.363687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.363810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.363834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.363849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.363862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.363892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.373697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.373863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.373889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.373904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.373917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.373952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.383693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.383804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.383828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.383842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.383855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.383884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.393741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.393841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.393868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.393884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.393896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.393926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.403816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.403965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.403992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.404007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.404035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.404064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.413793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.413895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.413919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.413934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.413947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.413977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.423856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.423947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.423971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.423985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.423998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.424043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.396 [2024-12-06 19:26:48.433879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.396 [2024-12-06 19:26:48.433967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.396 [2024-12-06 19:26:48.433992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.396 [2024-12-06 19:26:48.434021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.396 [2024-12-06 19:26:48.434034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.396 [2024-12-06 19:26:48.434063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.396 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.443979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.444100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.444124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.444138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.444151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.444179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.454015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.454131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.454154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.454169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.454181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.454210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.463988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.464089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.464117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.464132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.464145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.464175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.474010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.474115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.474138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.474153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.474166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.474194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.484078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.484170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.484194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.484209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.484221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.484250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.494095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.494177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.494201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.494215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.494227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.494256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.504117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.504202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.504227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.655 [2024-12-06 19:26:48.504242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.655 [2024-12-06 19:26:48.504255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.655 [2024-12-06 19:26:48.504288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.655 qpair failed and we were unable to recover it.
00:28:03.655 [2024-12-06 19:26:48.514094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.655 [2024-12-06 19:26:48.514178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.655 [2024-12-06 19:26:48.514201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.514215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.514228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.514256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.524168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.524305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.524329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.524344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.524356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.524385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.534186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.534277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.534301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.534316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.534328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.534356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.544246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.544363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.544388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.544403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.544415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.544444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.554219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.554305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.554329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.554343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.554371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.554401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.564322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.564445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.564469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.564484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.564497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.564525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.574291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.574375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.574399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.574413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.574425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.574454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.584309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.584396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.584420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.584435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.584447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.584476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.594354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.594485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.594514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.594530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.594542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.594571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.604397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.604489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.604513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.604527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.604540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.604569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.614397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.614504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.614527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.614542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.614555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.614584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.624480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.624572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.624596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.624610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.624623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.624652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.634472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.656 [2024-12-06 19:26:48.634588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.656 [2024-12-06 19:26:48.634612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.656 [2024-12-06 19:26:48.634627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.656 [2024-12-06 19:26:48.634645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.656 [2024-12-06 19:26:48.634675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.656 qpair failed and we were unable to recover it.
00:28:03.656 [2024-12-06 19:26:48.644485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.656 [2024-12-06 19:26:48.644579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.656 [2024-12-06 19:26:48.644603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.656 [2024-12-06 19:26:48.644617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.644630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.644659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-12-06 19:26:48.654482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.657 [2024-12-06 19:26:48.654569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.657 [2024-12-06 19:26:48.654592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.657 [2024-12-06 19:26:48.654606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.654618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.654646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-12-06 19:26:48.664495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.657 [2024-12-06 19:26:48.664576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.657 [2024-12-06 19:26:48.664600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.657 [2024-12-06 19:26:48.664615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.664628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.664657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-12-06 19:26:48.674557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.657 [2024-12-06 19:26:48.674670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.657 [2024-12-06 19:26:48.674697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.657 [2024-12-06 19:26:48.674740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.674755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.674785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-12-06 19:26:48.684684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.657 [2024-12-06 19:26:48.684803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.657 [2024-12-06 19:26:48.684828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.657 [2024-12-06 19:26:48.684843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.684856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.684886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.657 [2024-12-06 19:26:48.694595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.657 [2024-12-06 19:26:48.694680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.657 [2024-12-06 19:26:48.694718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.657 [2024-12-06 19:26:48.694746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.657 [2024-12-06 19:26:48.694760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.657 [2024-12-06 19:26:48.694790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.657 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.704606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.704695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.704747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.704763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.704776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.704806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.714694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.714802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.714830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.714845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.714858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.714888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.724773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.724912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.724943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.724958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.724971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.725001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.734679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.734801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.734827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.734842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.734855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.734884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.744741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.744833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.744858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.744873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.744885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.744915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.754779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.754879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.754904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.754919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.916 [2024-12-06 19:26:48.754932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.916 [2024-12-06 19:26:48.754963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.916 qpair failed and we were unable to recover it. 
00:28:03.916 [2024-12-06 19:26:48.764803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.916 [2024-12-06 19:26:48.764893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.916 [2024-12-06 19:26:48.764917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.916 [2024-12-06 19:26:48.764932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.764953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.764984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.774848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.774962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.775000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.775015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.775027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.775061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.784938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.785037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.785062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.785076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.785088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.785117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.794891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.794970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.794997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.795026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.795039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.795068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.804919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.805029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.805053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.805068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.805080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.805124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.814971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.815076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.815101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.815116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.815128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.815168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.825038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.825126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.825150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.825164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.825176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.825206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.835052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.835177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.835201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.835216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.835228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.835257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.845111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.845206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.845230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.845244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.845257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.845285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.855111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.855204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.855232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.855248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.855261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.855290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.865097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.865199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.865225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.865240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.865252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.865280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.875154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.917 [2024-12-06 19:26:48.875235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.917 [2024-12-06 19:26:48.875259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.917 [2024-12-06 19:26:48.875273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.917 [2024-12-06 19:26:48.875285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.917 [2024-12-06 19:26:48.875314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.917 qpair failed and we were unable to recover it. 
00:28:03.917 [2024-12-06 19:26:48.885232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.918 [2024-12-06 19:26:48.885322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.918 [2024-12-06 19:26:48.885345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.918 [2024-12-06 19:26:48.885359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.918 [2024-12-06 19:26:48.885372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.918 [2024-12-06 19:26:48.885401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.918 qpair failed and we were unable to recover it. 
00:28:03.918 [2024-12-06 19:26:48.895178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.918 [2024-12-06 19:26:48.895303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.918 [2024-12-06 19:26:48.895326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.918 [2024-12-06 19:26:48.895341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.918 [2024-12-06 19:26:48.895359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.918 [2024-12-06 19:26:48.895388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.918 qpair failed and we were unable to recover it. 
00:28:03.918 [2024-12-06 19:26:48.905265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:03.918 [2024-12-06 19:26:48.905350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:03.918 [2024-12-06 19:26:48.905374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:03.918 [2024-12-06 19:26:48.905388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:03.918 [2024-12-06 19:26:48.905400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:03.918 [2024-12-06 19:26:48.905430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:03.918 qpair failed and we were unable to recover it. 
00:28:03.918 [2024-12-06 19:26:48.915267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.918 [2024-12-06 19:26:48.915369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.918 [2024-12-06 19:26:48.915393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.918 [2024-12-06 19:26:48.915407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.918 [2024-12-06 19:26:48.915419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.918 [2024-12-06 19:26:48.915448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.918 qpair failed and we were unable to recover it.
00:28:03.918 [2024-12-06 19:26:48.925411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.918 [2024-12-06 19:26:48.925531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.918 [2024-12-06 19:26:48.925555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.918 [2024-12-06 19:26:48.925569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.918 [2024-12-06 19:26:48.925581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.918 [2024-12-06 19:26:48.925611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.918 qpair failed and we were unable to recover it.
00:28:03.918 [2024-12-06 19:26:48.935343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.918 [2024-12-06 19:26:48.935433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.918 [2024-12-06 19:26:48.935458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.918 [2024-12-06 19:26:48.935473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.918 [2024-12-06 19:26:48.935485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.918 [2024-12-06 19:26:48.935514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.918 qpair failed and we were unable to recover it.
00:28:03.918 [2024-12-06 19:26:48.945369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.918 [2024-12-06 19:26:48.945497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.918 [2024-12-06 19:26:48.945521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.918 [2024-12-06 19:26:48.945536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.918 [2024-12-06 19:26:48.945548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.918 [2024-12-06 19:26:48.945577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.918 qpair failed and we were unable to recover it.
00:28:03.918 [2024-12-06 19:26:48.955395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:03.918 [2024-12-06 19:26:48.955488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:03.918 [2024-12-06 19:26:48.955511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:03.918 [2024-12-06 19:26:48.955526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:03.918 [2024-12-06 19:26:48.955538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:03.918 [2024-12-06 19:26:48.955567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.918 qpair failed and we were unable to recover it.
00:28:04.177 [2024-12-06 19:26:48.965459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.177 [2024-12-06 19:26:48.965579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.177 [2024-12-06 19:26:48.965604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.177 [2024-12-06 19:26:48.965619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.177 [2024-12-06 19:26:48.965631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.177 [2024-12-06 19:26:48.965660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.177 qpair failed and we were unable to recover it.
00:28:04.177 [2024-12-06 19:26:48.975436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.177 [2024-12-06 19:26:48.975525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.177 [2024-12-06 19:26:48.975548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.177 [2024-12-06 19:26:48.975563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.177 [2024-12-06 19:26:48.975576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.177 [2024-12-06 19:26:48.975604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.177 qpair failed and we were unable to recover it.
00:28:04.177 [2024-12-06 19:26:48.985512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.177 [2024-12-06 19:26:48.985642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.177 [2024-12-06 19:26:48.985673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.177 [2024-12-06 19:26:48.985689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.177 [2024-12-06 19:26:48.985718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.177 [2024-12-06 19:26:48.985758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.177 qpair failed and we were unable to recover it.
00:28:04.177 [2024-12-06 19:26:48.995478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.177 [2024-12-06 19:26:48.995563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.177 [2024-12-06 19:26:48.995587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.177 [2024-12-06 19:26:48.995602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.177 [2024-12-06 19:26:48.995615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.177 [2024-12-06 19:26:48.995643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.177 qpair failed and we were unable to recover it.
00:28:04.177 [2024-12-06 19:26:49.005535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.177 [2024-12-06 19:26:49.005643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.177 [2024-12-06 19:26:49.005669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.177 [2024-12-06 19:26:49.005684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.177 [2024-12-06 19:26:49.005697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.005759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.015556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.015644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.015669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.015683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.015696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.015762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.025586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.025680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.025718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.025743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.025762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.025795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.035694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.035827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.035851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.035867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.035881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.035911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.045678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.045835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.045861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.045877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.045890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.045920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.055690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.055826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.055853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.055869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.055881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.055913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.065694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.065821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.065846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.065861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.065875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.065905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.075734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.075822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.075848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.075864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.075877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.075907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.085767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.085887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.085912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.085927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.085941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.085971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.095798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.095894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.095919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.095934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.095947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.095978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.105782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.105887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.105911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.105926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.105939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.105971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.115878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.115981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.116012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.116027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.116040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.116085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.125902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.126013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.126039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.126054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.126067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.126096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.178 qpair failed and we were unable to recover it.
00:28:04.178 [2024-12-06 19:26:49.135901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.178 [2024-12-06 19:26:49.135993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.178 [2024-12-06 19:26:49.136017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.178 [2024-12-06 19:26:49.136032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.178 [2024-12-06 19:26:49.136044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.178 [2024-12-06 19:26:49.136073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.145942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.146065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.146089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.146103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.146116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.146146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.155966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.156082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.156105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.156120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.156138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.156168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.166036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.166172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.166196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.166211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.166223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.166253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.176063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.176167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.176192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.176207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.176219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.176259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.186119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.186230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.186255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.186269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.186282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.186310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.196098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.196185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.196209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.196224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.196236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.196265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.206144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.206235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.206259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.206273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.206286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.206314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.179 [2024-12-06 19:26:49.216101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.179 [2024-12-06 19:26:49.216186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.179 [2024-12-06 19:26:49.216210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.179 [2024-12-06 19:26:49.216225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.179 [2024-12-06 19:26:49.216237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.179 [2024-12-06 19:26:49.216266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.179 qpair failed and we were unable to recover it.
00:28:04.438 [2024-12-06 19:26:49.226155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.438 [2024-12-06 19:26:49.226246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.438 [2024-12-06 19:26:49.226270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.438 [2024-12-06 19:26:49.226286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.438 [2024-12-06 19:26:49.226314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.438 [2024-12-06 19:26:49.226343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.438 qpair failed and we were unable to recover it.
00:28:04.439 [2024-12-06 19:26:49.236206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.439 [2024-12-06 19:26:49.236329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.439 [2024-12-06 19:26:49.236352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.439 [2024-12-06 19:26:49.236367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.439 [2024-12-06 19:26:49.236379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.439 [2024-12-06 19:26:49.236409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.439 qpair failed and we were unable to recover it.
00:28:04.439 [2024-12-06 19:26:49.246284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.439 [2024-12-06 19:26:49.246379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.439 [2024-12-06 19:26:49.246407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.439 [2024-12-06 19:26:49.246422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.439 [2024-12-06 19:26:49.246435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.439 [2024-12-06 19:26:49.246464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.439 qpair failed and we were unable to recover it.
00:28:04.439 [2024-12-06 19:26:49.256256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.439 [2024-12-06 19:26:49.256342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.439 [2024-12-06 19:26:49.256365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.439 [2024-12-06 19:26:49.256380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.439 [2024-12-06 19:26:49.256392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:04.439 [2024-12-06 19:26:49.256420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.439 qpair failed and we were unable to recover it.
00:28:04.439 [2024-12-06 19:26:49.266327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.266448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.266472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.266487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.266499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.266528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.276302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.276396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.276420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.276434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.276447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.276475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.286346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.286440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.286464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.286478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.286496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.286525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.296365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.296452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.296476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.296491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.296503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.296532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.306434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.306514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.306537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.306552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.306563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.306607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.316429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.316562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.316587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.316601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.316614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.316643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.326403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.326496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.326519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.326533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.326546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.326575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.336434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.336547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.439 [2024-12-06 19:26:49.336572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.439 [2024-12-06 19:26:49.336588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.439 [2024-12-06 19:26:49.336602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.439 [2024-12-06 19:26:49.336632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.439 qpair failed and we were unable to recover it. 
00:28:04.439 [2024-12-06 19:26:49.346430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.439 [2024-12-06 19:26:49.346559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.346585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.346600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.346612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.346640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.356521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.356646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.356671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.356687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.356698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.356752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.366525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.366618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.366643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.366658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.366670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.366699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.376560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.376674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.376705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.376745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.376762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.376792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.386605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.386705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.386742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.386758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.386771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.386800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.396583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.396675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.396699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.396740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.396754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.396784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.406677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.406822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.406848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.406864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.406876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.406905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.416691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.416807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.416831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.416846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.416867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.416898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.426734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.426833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.426857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.426871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.426885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.426915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.436705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.436822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.436849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.436864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.436877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.436907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.446783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.446926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.446952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.446967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.446980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.447025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.456880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.456967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.456993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.457007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.457020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.457064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.466818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.466916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.466940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.466955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.440 [2024-12-06 19:26:49.466968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.440 [2024-12-06 19:26:49.466997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.440 qpair failed and we were unable to recover it. 
00:28:04.440 [2024-12-06 19:26:49.476844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.440 [2024-12-06 19:26:49.476934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.440 [2024-12-06 19:26:49.476959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.440 [2024-12-06 19:26:49.476974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.441 [2024-12-06 19:26:49.476987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.441 [2024-12-06 19:26:49.477016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.441 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.486882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.486975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.487000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.487014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.487041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.487070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.496922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.497012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.497051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.497066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.497078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.497107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.506962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.507058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.507087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.507102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.507114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.507142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.516986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.517086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.517111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.517126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.517137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.517165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.527076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.527181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.527206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.527220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.527232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.527263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.537066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.537155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.537178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.537192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.537204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.537233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.547078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.547205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.547231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.547246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.547264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.547294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.557087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.557210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.557234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.557264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.557277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.557307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.567125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.699 [2024-12-06 19:26:49.567214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.699 [2024-12-06 19:26:49.567238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.699 [2024-12-06 19:26:49.567252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.699 [2024-12-06 19:26:49.567265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.699 [2024-12-06 19:26:49.567293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.699 qpair failed and we were unable to recover it. 
00:28:04.699 [2024-12-06 19:26:49.577152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.577237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.577261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.577275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.577287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.577315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.587178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.587258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.587283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.587298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.587310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.587338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.597204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.597320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.597345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.597360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.597372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.597401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.607255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.607354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.607378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.607392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.607404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.607432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.617264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.617373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.617398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.617413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.617425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.617454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.627324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.627429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.627453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.627468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.627480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.627509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.637292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.637393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.637422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.637438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.637450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.637479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.647328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.647417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.647441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.647454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.647467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.647495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.657403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.657491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.657514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.657529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.657541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.657570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.667378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.667461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.667484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.667498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.667510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.667538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.677379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.677470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.677494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.677508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.677526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.677555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.687449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.687539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.687563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.687577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.687589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.687617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.697529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.697614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.700 [2024-12-06 19:26:49.697637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.700 [2024-12-06 19:26:49.697651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.700 [2024-12-06 19:26:49.697664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.700 [2024-12-06 19:26:49.697692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.700 qpair failed and we were unable to recover it. 
00:28:04.700 [2024-12-06 19:26:49.707559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.700 [2024-12-06 19:26:49.707647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.701 [2024-12-06 19:26:49.707672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.701 [2024-12-06 19:26:49.707687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.701 [2024-12-06 19:26:49.707715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.701 [2024-12-06 19:26:49.707757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.701 qpair failed and we were unable to recover it. 
00:28:04.701 [2024-12-06 19:26:49.717507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.701 [2024-12-06 19:26:49.717593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.701 [2024-12-06 19:26:49.717616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.701 [2024-12-06 19:26:49.717631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.701 [2024-12-06 19:26:49.717643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.701 [2024-12-06 19:26:49.717672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.701 qpair failed and we were unable to recover it. 
00:28:04.701 [2024-12-06 19:26:49.727563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.701 [2024-12-06 19:26:49.727688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.701 [2024-12-06 19:26:49.727740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.701 [2024-12-06 19:26:49.727757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.701 [2024-12-06 19:26:49.727770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.701 [2024-12-06 19:26:49.727800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.701 qpair failed and we were unable to recover it. 
00:28:04.701 [2024-12-06 19:26:49.737579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.701 [2024-12-06 19:26:49.737669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.701 [2024-12-06 19:26:49.737694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.701 [2024-12-06 19:26:49.737731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.701 [2024-12-06 19:26:49.737745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.701 [2024-12-06 19:26:49.737775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.701 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.747656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.747761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.747785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.747801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.747814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.747844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.757744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.757835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.757859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.757874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.757886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.757916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.767687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.767851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.767882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.767898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.767911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.767940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.777690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.777814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.777839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.777854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.777867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.777896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.787791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.787891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.787915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.787930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.787943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.787972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.797800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.797896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.797920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.797936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.797948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.797979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.807808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.807901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.807925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.807940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.807957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.807988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.817865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.817957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.817983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.817998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.818011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.818056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.827847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.827931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.827955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.827971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.827984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.828027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.837894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.959 [2024-12-06 19:26:49.837987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.959 [2024-12-06 19:26:49.838012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.959 [2024-12-06 19:26:49.838041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.959 [2024-12-06 19:26:49.838054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.959 [2024-12-06 19:26:49.838083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.959 qpair failed and we were unable to recover it. 
00:28:04.959 [2024-12-06 19:26:49.847933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.848054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.848077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.848091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.848103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.848131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.857968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.858079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.858103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.858117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.858129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.858157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.867985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.868143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.868167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.868181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.868193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.868229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.878019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.878121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.878146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.878162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.878174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.878204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.888109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.888197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.888221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.888235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.888247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.888275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.898097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.898206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.898237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.898252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.898264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.898292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.908115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.908220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.908245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.908259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.908272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.908300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.918348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.918464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.918490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.918506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.918518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.918546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.928266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.928400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.928425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.928439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.928451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.928480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.938289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.938382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.938405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.938419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.938436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.938466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.948301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.948387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.948411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.948425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.948438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.948467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.958323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.958446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.958472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.958487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.958499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.958528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.968305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.960 [2024-12-06 19:26:49.968411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.960 [2024-12-06 19:26:49.968435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.960 [2024-12-06 19:26:49.968449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.960 [2024-12-06 19:26:49.968462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.960 [2024-12-06 19:26:49.968490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.960 qpair failed and we were unable to recover it. 
00:28:04.960 [2024-12-06 19:26:49.978340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.961 [2024-12-06 19:26:49.978464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.961 [2024-12-06 19:26:49.978489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.961 [2024-12-06 19:26:49.978504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.961 [2024-12-06 19:26:49.978516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.961 [2024-12-06 19:26:49.978544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.961 qpair failed and we were unable to recover it. 
00:28:04.961 [2024-12-06 19:26:49.988325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.961 [2024-12-06 19:26:49.988412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.961 [2024-12-06 19:26:49.988436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.961 [2024-12-06 19:26:49.988450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.961 [2024-12-06 19:26:49.988463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.961 [2024-12-06 19:26:49.988491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.961 qpair failed and we were unable to recover it. 
00:28:04.961 [2024-12-06 19:26:49.998395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.961 [2024-12-06 19:26:49.998482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.961 [2024-12-06 19:26:49.998505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.961 [2024-12-06 19:26:49.998519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.961 [2024-12-06 19:26:49.998531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:04.961 [2024-12-06 19:26:49.998560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:04.961 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.008498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.008604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.008634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.008650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.008662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.008695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.018587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.018703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.018750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.018767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.018779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.018810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.028547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.028661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.028700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.028717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.028738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.028781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.038617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.038756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.038785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.038801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.038814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.038846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.048617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.048713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.048747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.048764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.048777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.048808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.058619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.058755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.058780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.058795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.058808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.058838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.068597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.068683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.068743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.068761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.068785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.068817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.078615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.078730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.078761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.078776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.220 [2024-12-06 19:26:50.078789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.220 [2024-12-06 19:26:50.078819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.220 qpair failed and we were unable to recover it. 
00:28:05.220 [2024-12-06 19:26:50.088660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.220 [2024-12-06 19:26:50.088766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.220 [2024-12-06 19:26:50.088791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.220 [2024-12-06 19:26:50.088807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.088820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.088849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.098683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.098803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.098830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.098846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.098859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.098888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.108678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.108805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.108831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.108845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.108858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.108887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.118763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.118851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.118877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.118892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.118905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.118934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.128846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.128950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.128985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.129016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.129029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.129065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.138829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.138918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.138944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.138959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.138972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.139001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.148838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.148924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.148949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.148963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.148976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.149020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.158851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.158934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.158964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.158979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.158992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.159036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.168868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.168963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.168989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.169011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.169025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.169068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.178957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.179064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.179087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.179102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.179126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.179154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.188988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.189101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.189126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.189140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.189152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.189181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.199021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.199117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.199140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.199155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.199173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.199218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.209033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.209144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.209168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.209183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.221 [2024-12-06 19:26:50.209196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.221 [2024-12-06 19:26:50.209225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.221 qpair failed and we were unable to recover it. 
00:28:05.221 [2024-12-06 19:26:50.219063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.221 [2024-12-06 19:26:50.219180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.221 [2024-12-06 19:26:50.219203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.221 [2024-12-06 19:26:50.219218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.222 [2024-12-06 19:26:50.219230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.222 [2024-12-06 19:26:50.219259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.222 qpair failed and we were unable to recover it. 
00:28:05.222 [2024-12-06 19:26:50.229067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.222 [2024-12-06 19:26:50.229159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.222 [2024-12-06 19:26:50.229185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.222 [2024-12-06 19:26:50.229199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.222 [2024-12-06 19:26:50.229211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.222 [2024-12-06 19:26:50.229240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.222 qpair failed and we were unable to recover it. 
00:28:05.222 [2024-12-06 19:26:50.239174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.222 [2024-12-06 19:26:50.239265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.222 [2024-12-06 19:26:50.239305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.222 [2024-12-06 19:26:50.239320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.222 [2024-12-06 19:26:50.239333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.222 [2024-12-06 19:26:50.239362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.222 qpair failed and we were unable to recover it. 
00:28:05.222 [2024-12-06 19:26:50.249155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.222 [2024-12-06 19:26:50.249259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.222 [2024-12-06 19:26:50.249283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.222 [2024-12-06 19:26:50.249297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.222 [2024-12-06 19:26:50.249310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.222 [2024-12-06 19:26:50.249339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.222 qpair failed and we were unable to recover it. 
00:28:05.222 [2024-12-06 19:26:50.259148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.222 [2024-12-06 19:26:50.259239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.222 [2024-12-06 19:26:50.259263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.222 [2024-12-06 19:26:50.259277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.222 [2024-12-06 19:26:50.259290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.222 [2024-12-06 19:26:50.259319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.222 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.269143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.269243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.269269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.269284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.269296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.269326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.279193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.279293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.279335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.279351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.279363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.279392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.289262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.289355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.289384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.289400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.289413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.289443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.299300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.299395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.299419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.299434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.299462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.299491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.309294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.309414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.309438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.309452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.309464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.309493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.319395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.319479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.319504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.319518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.319530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.319559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.329382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.329473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.329497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.329511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.329528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.329558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.339471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.339598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.339639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.339654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.339667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.339696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.349366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.349469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.349494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.349509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.349521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.349551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.359447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.359538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.359562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.359577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.359605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.359635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.369531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.369640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.369663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.369678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.369691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.369744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.379516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.379605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.481 [2024-12-06 19:26:50.379629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.481 [2024-12-06 19:26:50.379644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.481 [2024-12-06 19:26:50.379657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.481 [2024-12-06 19:26:50.379685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.481 qpair failed and we were unable to recover it. 
00:28:05.481 [2024-12-06 19:26:50.389564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.481 [2024-12-06 19:26:50.389686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.389735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.389754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.389767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.389797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.399526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.399607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.399631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.399645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.399658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.399686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.409583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.409672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.409695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.409734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.409748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.409778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.419595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.419704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.419742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.419758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.419772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.419802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.429678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.429787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.429813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.429829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.429842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.429872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.439678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.439770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.439795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.439810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.439824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.439853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.449752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.449849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.449873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.449888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.449901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.449931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.459819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.482 [2024-12-06 19:26:50.459911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.482 [2024-12-06 19:26:50.459936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.482 [2024-12-06 19:26:50.459956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.482 [2024-12-06 19:26:50.459970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:05.482 [2024-12-06 19:26:50.460015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:05.482 qpair failed and we were unable to recover it. 
00:28:05.482 [2024-12-06 19:26:50.469788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.469899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.469924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.469940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.469952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.469982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-12-06 19:26:50.479824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.479936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.479960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.479975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.479989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.480019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-12-06 19:26:50.489864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.489957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.489981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.489996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.490009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.490038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-12-06 19:26:50.499866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.499955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.499979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.499994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.500007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.500051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-12-06 19:26:50.509963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.510068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.510092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.510106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.510118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.510147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.482 [2024-12-06 19:26:50.519974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.482 [2024-12-06 19:26:50.520061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.482 [2024-12-06 19:26:50.520101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.482 [2024-12-06 19:26:50.520115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.482 [2024-12-06 19:26:50.520127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.482 [2024-12-06 19:26:50.520156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.482 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.529954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.530080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.530104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.530118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.530130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.530170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.540086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.540177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.540200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.540215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.540228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.540257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.549989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.550077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.550106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.550122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.550135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.550164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.559999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.560093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.560118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.560133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.560145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.560174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.570085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.570226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.570251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.570266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.570279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.570308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.580182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.580282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.580307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.580322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.580335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.580364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.590126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.590222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.590246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.590266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.590279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.590308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.600231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.600328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.600351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.600365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.600378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.600406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.610227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.610340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.610362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.610376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.741 [2024-12-06 19:26:50.610389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.741 [2024-12-06 19:26:50.610417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.741 qpair failed and we were unable to recover it.
00:28:05.741 [2024-12-06 19:26:50.620220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.741 [2024-12-06 19:26:50.620302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.741 [2024-12-06 19:26:50.620326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.741 [2024-12-06 19:26:50.620340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.620352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.620381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.630234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.630317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.630341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.630356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.630369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.630397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.640281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.640365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.640389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.640403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.640415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.640444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.650322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.650457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.650483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.650499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.650511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.650540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.660351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.660441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.660464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.660479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.660491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.660520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.670442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.670527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.670550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.670565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.670577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.670605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.680406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.680484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.680512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.680527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.680539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.680567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.690479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.690567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.690591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.690605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.690617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.690646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.700454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.700539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.700563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.700577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.700589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.700618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.710445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.710532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.710555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.710569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.710583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.710612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.720550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.720636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.720660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.720683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.720698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.720753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.730536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.730634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.730657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.730671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.730683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.730747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.740562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.740656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.740679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.740692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.742 [2024-12-06 19:26:50.740731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.742 [2024-12-06 19:26:50.740764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.742 qpair failed and we were unable to recover it.
00:28:05.742 [2024-12-06 19:26:50.750644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.742 [2024-12-06 19:26:50.750769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.742 [2024-12-06 19:26:50.750793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.742 [2024-12-06 19:26:50.750809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.743 [2024-12-06 19:26:50.750822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.743 [2024-12-06 19:26:50.750851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.743 qpair failed and we were unable to recover it.
00:28:05.743 [2024-12-06 19:26:50.760690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.743 [2024-12-06 19:26:50.760806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.743 [2024-12-06 19:26:50.760833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.743 [2024-12-06 19:26:50.760848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.743 [2024-12-06 19:26:50.760860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.743 [2024-12-06 19:26:50.760890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.743 qpair failed and we were unable to recover it.
00:28:05.743 [2024-12-06 19:26:50.770719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.743 [2024-12-06 19:26:50.770845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.743 [2024-12-06 19:26:50.770870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.743 [2024-12-06 19:26:50.770886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.743 [2024-12-06 19:26:50.770899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.743 [2024-12-06 19:26:50.770928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.743 qpair failed and we were unable to recover it.
00:28:05.743 [2024-12-06 19:26:50.780683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:05.743 [2024-12-06 19:26:50.780841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:05.743 [2024-12-06 19:26:50.780868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:05.743 [2024-12-06 19:26:50.780882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:05.743 [2024-12-06 19:26:50.780895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:05.743 [2024-12-06 19:26:50.780925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:05.743 qpair failed and we were unable to recover it.
00:28:06.001 [2024-12-06 19:26:50.790761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.001 [2024-12-06 19:26:50.790853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.001 [2024-12-06 19:26:50.790878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.001 [2024-12-06 19:26:50.790893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.001 [2024-12-06 19:26:50.790906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.001 [2024-12-06 19:26:50.790936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.001 qpair failed and we were unable to recover it.
00:28:06.001 [2024-12-06 19:26:50.800739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.001 [2024-12-06 19:26:50.800828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.001 [2024-12-06 19:26:50.800854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.001 [2024-12-06 19:26:50.800868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.001 [2024-12-06 19:26:50.800881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.001 [2024-12-06 19:26:50.800910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.001 qpair failed and we were unable to recover it.
00:28:06.001 [2024-12-06 19:26:50.810788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.001 [2024-12-06 19:26:50.810896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.001 [2024-12-06 19:26:50.810921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.001 [2024-12-06 19:26:50.810939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.001 [2024-12-06 19:26:50.810952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.001 [2024-12-06 19:26:50.810981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.001 qpair failed and we were unable to recover it.
00:28:06.001 [2024-12-06 19:26:50.820884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.001 [2024-12-06 19:26:50.821020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.001 [2024-12-06 19:26:50.821062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.001 [2024-12-06 19:26:50.821077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.001 [2024-12-06 19:26:50.821090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.001 [2024-12-06 19:26:50.821120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.001 qpair failed and we were unable to recover it. 
00:28:06.001 [2024-12-06 19:26:50.830832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.001 [2024-12-06 19:26:50.830919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.001 [2024-12-06 19:26:50.830943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.001 [2024-12-06 19:26:50.830958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.001 [2024-12-06 19:26:50.830972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.001 [2024-12-06 19:26:50.831014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.001 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.840846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.840934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.840959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.840974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.840987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.841030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.850916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.851025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.851048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.851068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.851081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.851118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.860934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.861038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.861063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.861078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.861091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.861119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.870956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.871059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.871084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.871098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.871111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.871139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.880951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.881093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.881117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.881131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.881144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.881173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.891050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.891143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.891166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.891181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.891193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.891222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.901024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.901117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.901141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.901155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.901168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.901197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.911090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.911182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.911206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.911220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.911232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.911261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.921108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.921192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.921216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.921230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.921242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.921270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.931117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.931236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.931268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.931283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.931296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.931325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.941101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.941194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.941218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.941232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.941245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.941274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.951144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.951223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.951246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.951261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.951273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.951302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.961156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.002 [2024-12-06 19:26:50.961253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.002 [2024-12-06 19:26:50.961276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.002 [2024-12-06 19:26:50.961291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.002 [2024-12-06 19:26:50.961304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.002 [2024-12-06 19:26:50.961333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.002 qpair failed and we were unable to recover it. 
00:28:06.002 [2024-12-06 19:26:50.971257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:50.971349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:50.971373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:50.971388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:50.971400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:50.971429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:50.981224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:50.981319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:50.981343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:50.981375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:50.981390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:50.981418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:50.991333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:50.991444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:50.991469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:50.991483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:50.991495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:50.991524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:51.001292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:51.001374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:51.001398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:51.001412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:51.001425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:51.001453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:51.011370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:51.011474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:51.011497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:51.011512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:51.011524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:51.011553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:51.021365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:51.021473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:51.021499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:51.021514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:51.021526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:51.021554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:51.031375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:51.031457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:51.031481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:51.031495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:51.031508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:51.031537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.003 [2024-12-06 19:26:51.041398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.003 [2024-12-06 19:26:51.041483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.003 [2024-12-06 19:26:51.041507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.003 [2024-12-06 19:26:51.041521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.003 [2024-12-06 19:26:51.041533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.003 [2024-12-06 19:26:51.041562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.003 qpair failed and we were unable to recover it. 
00:28:06.261 [2024-12-06 19:26:51.051478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.261 [2024-12-06 19:26:51.051569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.261 [2024-12-06 19:26:51.051592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.261 [2024-12-06 19:26:51.051607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.261 [2024-12-06 19:26:51.051620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.261 [2024-12-06 19:26:51.051648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.261 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.061486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.061612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.061636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.061651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.061664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.061708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.071493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.071576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.071600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.071614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.071627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.071656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.081518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.081606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.081629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.081643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.081656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.081684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.091593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.091687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.091733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.091750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.091763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.091793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.101626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.101801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.101826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.101841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.101854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.101884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.111636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.111743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.111768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.111788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.111801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.111830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.121719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.121839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.121865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.121881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.121893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.121923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.131731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.131827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.131851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.131866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.131879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.131909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.141737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.141832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.141857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.141872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.141885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.141915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.151758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.151857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.151881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.151896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.151909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.151943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.161760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.161858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.161882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.161898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.161911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.161941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.171859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.171972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.171996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.172028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.172040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.172069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.262 qpair failed and we were unable to recover it. 
00:28:06.262 [2024-12-06 19:26:51.181831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.262 [2024-12-06 19:26:51.181921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.262 [2024-12-06 19:26:51.181945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.262 [2024-12-06 19:26:51.181960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.262 [2024-12-06 19:26:51.181973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.262 [2024-12-06 19:26:51.182018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.191856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.191945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.191969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.191984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.191996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.192040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.201913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.202048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.202074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.202089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.202101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.202129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.211970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.212081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.212106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.212121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.212134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.212162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.222013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.222099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.222123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.222139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.222152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.222181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.232046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.232156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.232184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.232200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.232212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.232242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.242044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.242182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.242208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.242228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.242242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.242272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.252117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.252253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.252277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.252292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.252304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.252334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.262104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.262222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.262247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.262261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.262274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.262303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.272107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.272206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.272230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.272245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.272257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.272285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.282151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.282229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.282252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.282266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.282278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.282311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.292149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.292240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.292265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.292280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.292293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.292321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.263 [2024-12-06 19:26:51.302195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.263 [2024-12-06 19:26:51.302281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.263 [2024-12-06 19:26:51.302305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.263 [2024-12-06 19:26:51.302320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.263 [2024-12-06 19:26:51.302332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.263 [2024-12-06 19:26:51.302361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.263 qpair failed and we were unable to recover it. 
00:28:06.522 [2024-12-06 19:26:51.312194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.522 [2024-12-06 19:26:51.312280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.522 [2024-12-06 19:26:51.312303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.522 [2024-12-06 19:26:51.312317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.522 [2024-12-06 19:26:51.312330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.522 [2024-12-06 19:26:51.312358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.522 qpair failed and we were unable to recover it. 
00:28:06.522 [2024-12-06 19:26:51.322344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.522 [2024-12-06 19:26:51.322467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.522 [2024-12-06 19:26:51.322493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.522 [2024-12-06 19:26:51.322508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.522 [2024-12-06 19:26:51.322529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.522 [2024-12-06 19:26:51.322558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.522 qpair failed and we were unable to recover it. 
00:28:06.522 [2024-12-06 19:26:51.332330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.522 [2024-12-06 19:26:51.332426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.522 [2024-12-06 19:26:51.332451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.522 [2024-12-06 19:26:51.332482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.332494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.332524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.342293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.342397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.342422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.342437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.342449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.342478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.352413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.352499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.352523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.352537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.352550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.352578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.362372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.362450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.362476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.362491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.362503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.362532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.372379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.372473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.372498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.372518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.372530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.372559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.382393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.382485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.382511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.382527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.382540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.382569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.392435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.523 [2024-12-06 19:26:51.392530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.523 [2024-12-06 19:26:51.392555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.523 [2024-12-06 19:26:51.392569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.523 [2024-12-06 19:26:51.392581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.523 [2024-12-06 19:26:51.392609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.523 qpair failed and we were unable to recover it. 
00:28:06.523 [2024-12-06 19:26:51.402512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.402637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.402662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.402677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.402689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.402740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.412468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.412571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.412595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.412609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.412622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.412656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.422474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.422584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.422611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.422626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.422638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.422668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.432519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.432606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.432631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.432646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.432658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.432686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.442565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.442653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.442676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.442690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.442702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.442756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.452582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.452672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.452695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.452710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.452745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.523 [2024-12-06 19:26:51.452777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.523 qpair failed and we were unable to recover it.
00:28:06.523 [2024-12-06 19:26:51.462644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.523 [2024-12-06 19:26:51.462771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.523 [2024-12-06 19:26:51.462795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.523 [2024-12-06 19:26:51.462811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.523 [2024-12-06 19:26:51.462824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.462853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.472634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.472752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.472778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.472793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.472807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.472836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.482679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.482789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.482815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.482831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.482843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.482873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.492813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.492924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.492950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.492965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.492978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.493022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.502773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.502876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.502901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.502921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.502935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.502965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.512782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.512886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.512912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.512927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.512939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.512969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.522793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.522878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.522902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.522917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.522930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.522958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.532842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.532936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.532961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.532976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.532989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.533033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.542855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.542951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.542975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.542990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.543002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.543051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.552904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.552989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.553028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.553043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.553055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.553083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.524 [2024-12-06 19:26:51.563041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.524 [2024-12-06 19:26:51.563163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.524 [2024-12-06 19:26:51.563204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.524 [2024-12-06 19:26:51.563219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.524 [2024-12-06 19:26:51.563232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.524 [2024-12-06 19:26:51.563262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.524 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.572968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.573078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.573103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.573118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.573130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.573159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.583003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.583104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.583128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.583142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.583154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.583182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.593035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.593123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.593147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.593161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.593174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.593202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.603094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.603184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.603207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.603221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.603233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.603261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.613110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.613199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.613223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.613238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.613250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.613278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.623102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.623200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.623223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.623237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.623250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.623279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.633146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.633231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.633256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.633276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.633290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.633318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.643158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.643241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.643266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.643281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.643293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.643322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.653182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.653281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.653305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.653320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.653332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.653361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.663214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.663298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.663322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.784 [2024-12-06 19:26:51.663337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.784 [2024-12-06 19:26:51.663349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.784 [2024-12-06 19:26:51.663377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.784 qpair failed and we were unable to recover it.
00:28:06.784 [2024-12-06 19:26:51.673328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.784 [2024-12-06 19:26:51.673419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.784 [2024-12-06 19:26:51.673443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.673457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.673469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.673505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.683270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.683354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.683378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.683392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.683405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.683433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.693287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.693374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.693397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.693412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.693425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.693453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.703318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.703435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.703465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.703480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.703493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.703525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.713352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.713434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.713458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.713472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.713484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.713513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.723376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.723464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.723489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.723504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.723516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.723545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.733419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.733511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.733534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.733548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.733560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.733589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.743499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.743582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.743606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.743621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.785 [2024-12-06 19:26:51.743634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.785 [2024-12-06 19:26:51.743663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.785 qpair failed and we were unable to recover it.
00:28:06.785 [2024-12-06 19:26:51.753435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.785 [2024-12-06 19:26:51.753519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.785 [2024-12-06 19:26:51.753543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.785 [2024-12-06 19:26:51.753557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.785 [2024-12-06 19:26:51.753570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.785 [2024-12-06 19:26:51.753598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-12-06 19:26:51.763470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.785 [2024-12-06 19:26:51.763551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.785 [2024-12-06 19:26:51.763577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.785 [2024-12-06 19:26:51.763597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.785 [2024-12-06 19:26:51.763610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.785 [2024-12-06 19:26:51.763638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-12-06 19:26:51.773557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.785 [2024-12-06 19:26:51.773716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.785 [2024-12-06 19:26:51.773752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.785 [2024-12-06 19:26:51.773767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.785 [2024-12-06 19:26:51.773780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.785 [2024-12-06 19:26:51.773811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-12-06 19:26:51.783526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.785 [2024-12-06 19:26:51.783615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.785 [2024-12-06 19:26:51.783639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.785 [2024-12-06 19:26:51.783653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.785 [2024-12-06 19:26:51.783665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.785 [2024-12-06 19:26:51.783694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-12-06 19:26:51.793669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.785 [2024-12-06 19:26:51.793783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.785 [2024-12-06 19:26:51.793810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.785 [2024-12-06 19:26:51.793825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.785 [2024-12-06 19:26:51.793838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:06.785 [2024-12-06 19:26:51.793868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:06.785 qpair failed and we were unable to recover it. 
00:28:06.785 [2024-12-06 19:26:51.803578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.785 [2024-12-06 19:26:51.803660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.785 [2024-12-06 19:26:51.803683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.785 [2024-12-06 19:26:51.803697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.786 [2024-12-06 19:26:51.803733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.786 [2024-12-06 19:26:51.803770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.786 qpair failed and we were unable to recover it.
00:28:06.786 [2024-12-06 19:26:51.813622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.786 [2024-12-06 19:26:51.813742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.786 [2024-12-06 19:26:51.813767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.786 [2024-12-06 19:26:51.813781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.786 [2024-12-06 19:26:51.813794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.786 [2024-12-06 19:26:51.813824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.786 qpair failed and we were unable to recover it.
00:28:06.786 [2024-12-06 19:26:51.823641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.786 [2024-12-06 19:26:51.823755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.786 [2024-12-06 19:26:51.823781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.786 [2024-12-06 19:26:51.823796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.786 [2024-12-06 19:26:51.823809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:06.786 [2024-12-06 19:26:51.823838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.786 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.833681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.833811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.833837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.833852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.833864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.833893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.843714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.843829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.843855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.843870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.843883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.843913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.853767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.853875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.853901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.853916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.853929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.853959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.863787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.863871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.863896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.863911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.863924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.863954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.873797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.873885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.873909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.873924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.873936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.873966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.883819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.883904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.883929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.883943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.883956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.883985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.893858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.893955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.893979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.894000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.047 [2024-12-06 19:26:51.894013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.047 [2024-12-06 19:26:51.894057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.047 qpair failed and we were unable to recover it.
00:28:07.047 [2024-12-06 19:26:51.903888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.047 [2024-12-06 19:26:51.903998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.047 [2024-12-06 19:26:51.904038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.047 [2024-12-06 19:26:51.904054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.904066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.904095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.914000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.914099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.914122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.914136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.914148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.914176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.923939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.924043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.924066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.924080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.924093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.924121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.934093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.934186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.934210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.934225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.934237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.934271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.944006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.944108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.944132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.944147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.944160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.944188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.954041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.954146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.954171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.954186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.954198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.954226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.964082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.964189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.964215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.964230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.964242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.964270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.974105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.974204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.974228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.974242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.974254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.974282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.984181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.984275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.984300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.984316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.984328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.984356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:51.994128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:51.994213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:51.994237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:51.994251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:51.994264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:51.994293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:52.004159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:52.004276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:52.004302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:52.004317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:52.004329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:52.004358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:52.014216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:52.014315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:52.014338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:52.014353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:52.014365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:52.014394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:52.024233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:52.024361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:52.024386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:52.024405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:52.024419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.048 [2024-12-06 19:26:52.024447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.048 qpair failed and we were unable to recover it.
00:28:07.048 [2024-12-06 19:26:52.034224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.048 [2024-12-06 19:26:52.034310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.048 [2024-12-06 19:26:52.034336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.048 [2024-12-06 19:26:52.034350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.048 [2024-12-06 19:26:52.034362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0
00:28:07.049 [2024-12-06 19:26:52.034391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:07.049 qpair failed and we were unable to recover it.
00:28:07.049 [2024-12-06 19:26:52.044257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.044338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.044364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.044379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.044392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.044421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-12-06 19:26:52.054400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.054492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.054517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.054532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.054544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.054572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-12-06 19:26:52.064404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.064486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.064509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.064523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.064536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.064568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-12-06 19:26:52.074343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.074430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.074456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.074472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.074485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.074514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-12-06 19:26:52.084378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.084462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.084486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.084500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.084513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.084542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.049 [2024-12-06 19:26:52.094469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.049 [2024-12-06 19:26:52.094562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.049 [2024-12-06 19:26:52.094589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.049 [2024-12-06 19:26:52.094604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.049 [2024-12-06 19:26:52.094617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.049 [2024-12-06 19:26:52.094647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.049 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.104476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.104568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.104594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.104609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.104621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.104650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.114459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.114597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.114621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.114635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.114646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.114675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.124511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.124593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.124620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.124634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.124646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.124674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.134530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.134615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.134639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.134654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.134666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.134694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.144639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.144753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.144777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.144792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.144804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.144834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.154596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.154676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.154701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.154743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.154761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.154790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.308 qpair failed and we were unable to recover it. 
00:28:07.308 [2024-12-06 19:26:52.164690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.308 [2024-12-06 19:26:52.164834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.308 [2024-12-06 19:26:52.164861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.308 [2024-12-06 19:26:52.164876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.308 [2024-12-06 19:26:52.164889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.308 [2024-12-06 19:26:52.164919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.174634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.174748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.174773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.174787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.174800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.174830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.184685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.184843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.184869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.184884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.184897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.184926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.194688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.194797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.194824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.194838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.194851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.194886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.204773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.204864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.204890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.204904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.204917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.204947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.214809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.214940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.214966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.214980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.214993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.215047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.224810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.224901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.224927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.224942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.224954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.224984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.234803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.234904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.234930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.234945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.234957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.234987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.244848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.244942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.244967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.244982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.244994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11dc5d0 00:28:07.309 [2024-12-06 19:26:52.245039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.254896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.254990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.255024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.255041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.255054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5930000b90 00:28:07.309 [2024-12-06 19:26:52.255086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.309 qpair failed and we were unable to recover it. 
00:28:07.309 [2024-12-06 19:26:52.264891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.309 [2024-12-06 19:26:52.264982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.309 [2024-12-06 19:26:52.265010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.309 [2024-12-06 19:26:52.265026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.309 [2024-12-06 19:26:52.265039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5930000b90 00:28:07.309 [2024-12-06 19:26:52.265070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:07.309 qpair failed and we were unable to recover it. 00:28:07.310 [2024-12-06 19:26:52.265196] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:07.310 A controller has encountered a failure and is being reset. 00:28:07.310 Controller properly reset. 00:28:07.310 Initializing NVMe Controllers 00:28:07.310 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:07.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:07.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:07.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:07.310 Initialization complete. Launching workers. 
00:28:07.310 Starting thread on core 1 00:28:07.310 Starting thread on core 2 00:28:07.310 Starting thread on core 3 00:28:07.310 Starting thread on core 0 00:28:07.310 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:07.310 00:28:07.310 real 0m10.749s 00:28:07.310 user 0m19.269s 00:28:07.310 sys 0m5.484s 00:28:07.310 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.310 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.310 ************************************ 00:28:07.310 END TEST nvmf_target_disconnect_tc2 00:28:07.310 ************************************ 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.568 rmmod nvme_tcp 00:28:07.568 rmmod nvme_fabrics 00:28:07.568 rmmod nvme_keyring 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 330509 ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 330509 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 330509 ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 330509 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330509 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330509' 00:28:07.568 killing process with pid 330509 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 330509 00:28:07.568 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 330509 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.826 19:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.362 19:26:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.362 00:28:10.362 real 0m15.826s 00:28:10.362 user 0m45.467s 00:28:10.362 sys 0m7.632s 00:28:10.362 19:26:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.362 19:26:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:10.362 ************************************ 00:28:10.362 END TEST nvmf_target_disconnect 00:28:10.363 ************************************ 00:28:10.363 19:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:10.363 00:28:10.363 real 5m12.338s 00:28:10.363 user 11m0.879s 00:28:10.363 sys 1m17.442s 00:28:10.363 19:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.363 19:26:54 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.363 ************************************ 00:28:10.363 END TEST nvmf_host 00:28:10.363 ************************************ 00:28:10.363 19:26:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:10.363 19:26:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:10.363 19:26:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:10.363 19:26:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:10.363 19:26:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.363 19:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:10.363 ************************************ 00:28:10.363 START TEST nvmf_target_core_interrupt_mode 00:28:10.363 ************************************ 00:28:10.363 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:10.363 * Looking for test storage... 
00:28:10.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:10.363 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:10.363 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:10.363 19:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:10.363 19:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:10.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.363 --rc 
genhtml_branch_coverage=1 00:28:10.363 --rc genhtml_function_coverage=1 00:28:10.363 --rc genhtml_legend=1 00:28:10.363 --rc geninfo_all_blocks=1 00:28:10.363 --rc geninfo_unexecuted_blocks=1 00:28:10.363 00:28:10.363 ' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:10.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.363 --rc genhtml_branch_coverage=1 00:28:10.363 --rc genhtml_function_coverage=1 00:28:10.363 --rc genhtml_legend=1 00:28:10.363 --rc geninfo_all_blocks=1 00:28:10.363 --rc geninfo_unexecuted_blocks=1 00:28:10.363 00:28:10.363 ' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:10.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.363 --rc genhtml_branch_coverage=1 00:28:10.363 --rc genhtml_function_coverage=1 00:28:10.363 --rc genhtml_legend=1 00:28:10.363 --rc geninfo_all_blocks=1 00:28:10.363 --rc geninfo_unexecuted_blocks=1 00:28:10.363 00:28:10.363 ' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:10.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.363 --rc genhtml_branch_coverage=1 00:28:10.363 --rc genhtml_function_coverage=1 00:28:10.363 --rc genhtml_legend=1 00:28:10.363 --rc geninfo_all_blocks=1 00:28:10.363 --rc geninfo_unexecuted_blocks=1 00:28:10.363 00:28:10.363 ' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.363 
19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.363 19:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.363 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:10.364 
19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:10.364 ************************************ 00:28:10.364 START TEST nvmf_abort 00:28:10.364 ************************************ 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:10.364 * Looking for test storage... 
00:28:10.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:10.364 19:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:10.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.364 --rc genhtml_branch_coverage=1 00:28:10.364 --rc genhtml_function_coverage=1 00:28:10.364 --rc genhtml_legend=1 00:28:10.364 --rc geninfo_all_blocks=1 00:28:10.364 --rc geninfo_unexecuted_blocks=1 00:28:10.364 00:28:10.364 ' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:10.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.364 --rc genhtml_branch_coverage=1 00:28:10.364 --rc genhtml_function_coverage=1 00:28:10.364 --rc genhtml_legend=1 00:28:10.364 --rc geninfo_all_blocks=1 00:28:10.364 --rc geninfo_unexecuted_blocks=1 00:28:10.364 00:28:10.364 ' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:10.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.364 --rc genhtml_branch_coverage=1 00:28:10.364 --rc genhtml_function_coverage=1 00:28:10.364 --rc genhtml_legend=1 00:28:10.364 --rc geninfo_all_blocks=1 00:28:10.364 --rc geninfo_unexecuted_blocks=1 00:28:10.364 00:28:10.364 ' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:10.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.364 --rc genhtml_branch_coverage=1 00:28:10.364 --rc genhtml_function_coverage=1 00:28:10.364 --rc genhtml_legend=1 00:28:10.364 --rc geninfo_all_blocks=1 00:28:10.364 --rc geninfo_unexecuted_blocks=1 00:28:10.364 00:28:10.364 ' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.364 19:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.364 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.365 19:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.365 19:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.266 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:12.266 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.267 19:26:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:12.267 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:12.267 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.267 
19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:12.267 Found net devices under 0000:84:00.0: cvl_0_0 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:12.267 Found net devices under 0000:84:00.1: cvl_0_1 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.267 19:26:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.267 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:28:12.527 00:28:12.527 --- 10.0.0.2 ping statistics --- 00:28:12.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.527 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:12.527 00:28:12.527 --- 10.0.0.1 ping statistics --- 00:28:12.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.527 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=333337 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 333337 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 333337 ']' 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.527 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.527 [2024-12-06 19:26:57.496795] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:12.527 [2024-12-06 19:26:57.497985] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:28:12.527 [2024-12-06 19:26:57.498056] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.527 [2024-12-06 19:26:57.573359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.786 [2024-12-06 19:26:57.635838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.786 [2024-12-06 19:26:57.635905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.786 [2024-12-06 19:26:57.635918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.786 [2024-12-06 19:26:57.635930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.786 [2024-12-06 19:26:57.635941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.786 [2024-12-06 19:26:57.637550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.786 [2024-12-06 19:26:57.637615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.786 [2024-12-06 19:26:57.637619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.786 [2024-12-06 19:26:57.739134] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:12.786 [2024-12-06 19:26:57.739340] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:12.786 [2024-12-06 19:26:57.739377] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:12.786 [2024-12-06 19:26:57.739612] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.786 [2024-12-06 19:26:57.786350] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:12.786 Malloc0 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.786 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 Delay0 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 [2024-12-06 19:26:57.858549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.046 19:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:13.046 [2024-12-06 19:26:57.966879] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:15.578 Initializing NVMe Controllers 00:28:15.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:15.578 controller IO queue size 128 less than required 00:28:15.578 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:15.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:15.578 Initialization complete. Launching workers. 
00:28:15.578 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27738 00:28:15.578 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27795, failed to submit 66 00:28:15.578 success 27738, unsuccessful 57, failed 0 00:28:15.578 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.579 rmmod nvme_tcp 00:28:15.579 rmmod nvme_fabrics 00:28:15.579 rmmod nvme_keyring 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.579 19:27:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 333337 ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 333337 ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333337' 00:28:15.579 killing process with pid 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 333337 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.579 19:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.483 00:28:17.483 real 0m7.437s 00:28:17.483 user 0m9.460s 00:28:17.483 sys 0m3.125s 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:17.483 ************************************ 00:28:17.483 END TEST nvmf_abort 00:28:17.483 ************************************ 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.483 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:17.766 ************************************ 00:28:17.766 START TEST nvmf_ns_hotplug_stress 00:28:17.766 ************************************ 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:17.766 * Looking for test storage... 00:28:17.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.766 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.767 --rc genhtml_branch_coverage=1 00:28:17.767 --rc genhtml_function_coverage=1 00:28:17.767 --rc genhtml_legend=1 00:28:17.767 --rc geninfo_all_blocks=1 00:28:17.767 --rc geninfo_unexecuted_blocks=1 00:28:17.767 00:28:17.767 ' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.767 --rc genhtml_branch_coverage=1 00:28:17.767 --rc genhtml_function_coverage=1 00:28:17.767 --rc genhtml_legend=1 00:28:17.767 --rc geninfo_all_blocks=1 00:28:17.767 --rc geninfo_unexecuted_blocks=1 00:28:17.767 00:28:17.767 ' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.767 --rc genhtml_branch_coverage=1 00:28:17.767 --rc genhtml_function_coverage=1 00:28:17.767 --rc genhtml_legend=1 00:28:17.767 --rc geninfo_all_blocks=1 00:28:17.767 --rc geninfo_unexecuted_blocks=1 00:28:17.767 00:28:17.767 ' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.767 --rc genhtml_branch_coverage=1 00:28:17.767 --rc genhtml_function_coverage=1 00:28:17.767 --rc genhtml_legend=1 00:28:17.767 --rc geninfo_all_blocks=1 00:28:17.767 --rc geninfo_unexecuted_blocks=1 00:28:17.767 00:28:17.767 ' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.767 19:27:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.767 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.768 19:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.768 19:27:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:19.843 19:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.843 
19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:19.843 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.843 19:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:19.843 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:19.843 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.844 19:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:19.844 Found net devices under 0000:84:00.0: cvl_0_0 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:19.844 Found net devices under 0000:84:00.1: cvl_0_1 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.844 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:28:20.102 00:28:20.102 --- 10.0.0.2 ping statistics --- 00:28:20.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.102 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:20.102 00:28:20.102 --- 10.0.0.1 ping statistics --- 00:28:20.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.102 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.102 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.103 19:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=335781 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 335781 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 335781 ']' 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:20.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.103 19:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:20.103 [2024-12-06 19:27:05.033596] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:20.103 [2024-12-06 19:27:05.034782] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:28:20.103 [2024-12-06 19:27:05.034842] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.103 [2024-12-06 19:27:05.109757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.360 [2024-12-06 19:27:05.167990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.360 [2024-12-06 19:27:05.168055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.360 [2024-12-06 19:27:05.168078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.360 [2024-12-06 19:27:05.168090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.360 [2024-12-06 19:27:05.168099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:20.360 [2024-12-06 19:27:05.169686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.360 [2024-12-06 19:27:05.169745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.360 [2024-12-06 19:27:05.169751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.360 [2024-12-06 19:27:05.257425] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:20.360 [2024-12-06 19:27:05.257642] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:20.360 [2024-12-06 19:27:05.257658] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:20.360 [2024-12-06 19:27:05.257917] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:20.360 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:20.619 [2024-12-06 19:27:05.574444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.619 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:20.878 19:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.138 [2024-12-06 19:27:06.146759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.138 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:21.706 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:21.706 Malloc0 00:28:21.706 19:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:22.271 Delay0 00:28:22.271 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.271 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:22.529 NULL1 00:28:22.787 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:23.045 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=336419 00:28:23.045 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:23.045 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:23.046 19:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.981 Read completed with error (sct=0, sc=11) 00:28:23.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.981 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.497 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:24.497 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:24.756 true 00:28:24.756 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:24.756 19:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.322 19:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.581 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:28:25.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.839 19:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:25.839 19:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:26.098 true 00:28:26.098 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:26.098 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 19:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.036 19:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:27.036 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:27.295 true 00:28:27.295 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:27.295 19:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.229 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.487 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:28.487 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:28.744 true 00:28:28.744 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:28.744 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.002 19:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:28:29.258 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:29.258 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:29.516 true 00:28:29.516 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:29.516 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.774 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.039 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:30.039 19:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:30.297 true 00:28:30.297 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:30.297 19:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.234 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.491 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:31.491 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:31.748 true 00:28:31.748 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:31.748 19:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.004 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.261 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:32.261 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:32.518 true 00:28:32.518 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:32.518 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.086 19:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.086 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:33.086 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:33.653 true 00:28:33.653 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:33.653 19:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.591 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.591 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:34.591 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:34.850 true 00:28:34.850 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:34.850 19:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.109 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.367 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:35.367 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:35.626 true 00:28:35.626 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:35.626 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.193 19:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.193 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:36.193 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:36.759 true 00:28:36.759 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:36.759 19:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.584 19:27:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.842 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:37.842 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:38.100 true 00:28:38.100 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:38.100 19:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.358 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.616 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:38.616 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:38.873 true 00:28:38.873 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:38.873 19:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.130 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.388 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:39.388 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:39.646 true 00:28:39.646 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:39.646 19:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.583 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.098 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:41.098 19:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:41.356 true 00:28:41.356 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 336419 00:28:41.356 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.615 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.873 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:41.873 19:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:42.131 true 00:28:42.131 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:42.131 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.389 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.648 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:42.648 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:42.906 true 00:28:42.907 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:42.907 19:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.842 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.100 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:44.100 19:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:44.357 true 00:28:44.357 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:44.357 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.615 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.873 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:44.873 19:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 
00:28:45.132 true 00:28:45.132 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:45.132 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.390 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.649 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:45.649 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:45.907 true 00:28:45.907 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:45.907 19:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.844 19:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.103 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:47.103 19:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:47.362 true 00:28:47.362 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:47.621 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.880 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.141 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:48.141 19:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:48.399 true 00:28:48.399 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:48.399 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.657 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.916 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 
00:28:48.916 19:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:49.174 true 00:28:49.174 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:49.174 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.108 19:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.366 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:50.366 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:50.622 true 00:28:50.622 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:50.622 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.880 19:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.138 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1026 00:28:51.138 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:51.395 true 00:28:51.395 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:51.395 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.652 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.910 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:51.910 19:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:52.167 true 00:28:52.167 19:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:52.167 19:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.101 Initializing NVMe Controllers 00:28:53.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.101 Controller IO queue size 128, less than required. 
00:28:53.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.101 Controller IO queue size 128, less than required. 00:28:53.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.101 Initialization complete. Launching workers. 00:28:53.101 ======================================================== 00:28:53.101 Latency(us) 00:28:53.101 Device Information : IOPS MiB/s Average min max 00:28:53.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.78 0.59 45730.68 3018.73 1169201.82 00:28:53.101 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9151.31 4.47 13988.26 2634.52 539832.89 00:28:53.101 ======================================================== 00:28:53.101 Total : 10350.09 5.05 17664.78 2634.52 1169201.82 00:28:53.101 00:28:53.101 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.359 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:53.359 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:53.617 true 00:28:53.617 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 336419 00:28:53.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: 
(336419) - No such process 00:28:53.617 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 336419 00:28:53.617 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.874 19:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:54.133 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:54.133 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:54.133 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:54.133 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:54.133 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:54.391 null0 00:28:54.391 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:54.391 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:54.391 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:54.649 null1 00:28:54.649 19:27:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:54.649 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:54.649 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:54.907 null2 00:28:54.907 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:54.907 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:54.907 19:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:55.165 null3 00:28:55.424 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.424 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.424 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:55.684 null4 00:28:55.684 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.684 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.684 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:55.945 null5 00:28:55.945 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.945 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.945 19:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:56.204 null6 00:28:56.204 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.204 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.204 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:56.463 null7 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
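The `null0` through `null7` creations above all follow one pattern from the `@58`-`@60` echoes: an `nthreads=8` counter loop issuing `bdev_null_create` with a 100 MiB size and 4096-byte block size. A sketch of that setup phase, again with `rpc` as a hypothetical stub for `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the eight-way setup loop (@58-@60): create null0..null7, each a
# 100 MiB null bdev with 4096-byte blocks, matching the trace's arguments.
# rpc() is a stub standing in for scripts/rpc.py.
rpc() { echo "rpc $*"; }

nthreads=8
created=()
for (( i = 0; i < nthreads; i++ )); do
    rpc bdev_null_create "null$i" 100 4096    # name, size_mb, block_size
    created+=("null$i")
done
echo "created ${#created[@]} bdevs: ${created[*]}"
```

Null bdevs discard writes and return zeroes on reads, so eight of them give the hotplug stress phase cheap, independent backing devices with no real I/O cost.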
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.463 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 340671 340674 340677 340680 340684 340687 340690 340693 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.464 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:56.722 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.980 19:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.240 19:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.240 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.498 19:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.498 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.499 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.756 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.756 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.756 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.757 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.015 19:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.015 19:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.274 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.275 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.275 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.275 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.275 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.534 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.534 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.793 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.793 19:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.793 19:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.052 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.052 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.310 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.310 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.311 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.311 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.878 19:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.878 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.137 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.138 19:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.394 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.652 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.910 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.911 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.911 19:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.169 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:01.428 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.995 19:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:01.995 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:02.254 19:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:02.254 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:02.511 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:02.512 19:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.512 rmmod nvme_tcp 00:29:02.512 rmmod nvme_fabrics 00:29:02.512 rmmod nvme_keyring 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 335781 ']' 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 335781 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 335781 ']' 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 335781 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335781 00:29:02.512 19:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335781' 00:29:02.512 killing process with pid 335781 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 335781 00:29:02.512 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 335781 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.769 19:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.769 19:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.305 00:29:05.305 real 0m47.195s 00:29:05.305 user 3m17.159s 00:29:05.305 sys 0m21.826s 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.305 ************************************ 00:29:05.305 END TEST nvmf_ns_hotplug_stress 00:29:05.305 ************************************ 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.305 ************************************ 00:29:05.305 START TEST nvmf_delete_subsystem 00:29:05.305 ************************************ 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:05.305 * Looking for test storage... 00:29:05.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.305 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.306 
19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:05.306 19:27:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.306 --rc genhtml_branch_coverage=1 00:29:05.306 --rc genhtml_function_coverage=1 00:29:05.306 --rc genhtml_legend=1 00:29:05.306 --rc geninfo_all_blocks=1 00:29:05.306 --rc geninfo_unexecuted_blocks=1 00:29:05.306 00:29:05.306 ' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.306 --rc genhtml_branch_coverage=1 00:29:05.306 --rc genhtml_function_coverage=1 00:29:05.306 --rc genhtml_legend=1 00:29:05.306 --rc geninfo_all_blocks=1 00:29:05.306 --rc geninfo_unexecuted_blocks=1 00:29:05.306 00:29:05.306 ' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.306 --rc genhtml_branch_coverage=1 00:29:05.306 --rc genhtml_function_coverage=1 00:29:05.306 --rc genhtml_legend=1 00:29:05.306 --rc geninfo_all_blocks=1 00:29:05.306 --rc 
geninfo_unexecuted_blocks=1 00:29:05.306 00:29:05.306 ' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.306 --rc genhtml_branch_coverage=1 00:29:05.306 --rc genhtml_function_coverage=1 00:29:05.306 --rc genhtml_legend=1 00:29:05.306 --rc geninfo_all_blocks=1 00:29:05.306 --rc geninfo_unexecuted_blocks=1 00:29:05.306 00:29:05.306 ' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.306 
19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.306 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.307 19:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.307 19:27:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:07.211 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:84:00.1 (0x8086 - 0x159b)' 00:29:07.211 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.211 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.212 19:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:07.212 Found net devices under 0000:84:00.0: cvl_0_0 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:07.212 Found net devices under 0000:84:00.1: cvl_0_1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.212 19:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:07.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:29:07.212 00:29:07.212 --- 10.0.0.2 ping statistics --- 00:29:07.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.212 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:29:07.212 00:29:07.212 --- 10.0.0.1 ping statistics --- 00:29:07.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.212 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=343476 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 343476 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 343476 ']' 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.212 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.471 [2024-12-06 19:27:52.298024] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:07.471 [2024-12-06 19:27:52.299115] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:29:07.471 [2024-12-06 19:27:52.299186] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.471 [2024-12-06 19:27:52.371133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.471 [2024-12-06 19:27:52.423746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.471 [2024-12-06 19:27:52.423812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.471 [2024-12-06 19:27:52.423837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.471 [2024-12-06 19:27:52.423847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.471 [2024-12-06 19:27:52.423856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.471 [2024-12-06 19:27:52.425278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.471 [2024-12-06 19:27:52.425284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.471 [2024-12-06 19:27:52.507085] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:07.471 [2024-12-06 19:27:52.507093] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:07.471 [2024-12-06 19:27:52.507348] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 [2024-12-06 19:27:52.565991] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 [2024-12-06 19:27:52.586224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 NULL1 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 Delay0 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=343560 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:07.730 19:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:07.730 [2024-12-06 19:27:52.669906] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:09.629 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.629 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.629 19:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, 
sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, 
sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read 
completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 [2024-12-06 19:27:54.925447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fe2cc000c40 is same with the state(6) to be set 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 starting I/O failed: -6 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.889 Read completed with error (sct=0, sc=8) 00:29:09.889 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error 
(sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error 
(sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Write completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 Read completed with error (sct=0, sc=8) 00:29:09.890 starting I/O failed: -6 00:29:09.890 starting I/O failed: -6 00:29:09.890 starting I/O failed: -6 00:29:09.890 starting I/O failed: -6 00:29:09.890 starting I/O failed: -6 00:29:09.890 starting I/O failed: -6 00:29:11.262 [2024-12-06 19:27:55.888435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ae9b0 is same with the state(6) to be set 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write 
completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 [2024-12-06 19:27:55.927118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe2cc00d7e0 is same with the state(6) to be set 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed 
with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 [2024-12-06 19:27:55.927278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe2cc00d020 is same with the state(6) to be set 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error 
(sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 [2024-12-06 19:27:55.928144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ad680 is same with the state(6) to be set 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.262 Read completed with error (sct=0, sc=8) 00:29:11.262 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 
00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Write completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 Read completed with error (sct=0, sc=8) 00:29:11.263 [2024-12-06 19:27:55.928958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ad2c0 is same with the state(6) to be set 00:29:11.263 Initializing NVMe Controllers 00:29:11.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.263 Controller IO queue size 128, less than required. 00:29:11.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:11.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:11.263 Initialization complete. Launching workers. 
00:29:11.263 ========================================================
00:29:11.263 Latency(us)
00:29:11.263 Device Information                                                        : IOPS      MiB/s    Average        min            max
00:29:11.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.53 0.09 906834.35 694.57 1013456.90
00:29:11.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.19 0.08 904309.22 591.35 1013622.70
00:29:11.263 ========================================================
00:29:11.263 Total                                                                     : 352.72 0.17 905644.59 591.35 1013622.70
00:29:11.263
00:29:11.263 [2024-12-06 19:27:55.929550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ae9b0 (9): Bad file descriptor
00:29:11.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:11.263 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.263 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:11.263 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 343560
00:29:11.263 19:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 343560
00:29:11.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (343560) - No such process
00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 343560
00:29:11.521 19:27:56
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 343560 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 343560 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.521 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:11.521 [2024-12-06 19:27:56.454149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=344021 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:11.522 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:11.522 [2024-12-06 19:27:56.515618] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:12.085 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:12.085 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:12.085 19:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:12.649 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:12.649 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:12.649 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:13.213 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:13.213 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:13.213 19:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:13.470 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:29:13.470 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:13.470 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.034 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.034 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:14.034 19:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.599 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.599 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021 00:29:14.599 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.857 Initializing NVMe Controllers 00:29:14.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.857 Controller IO queue size 128, less than required. 00:29:14.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:14.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:14.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:14.857 Initialization complete. Launching workers. 
00:29:14.857 ========================================================
00:29:14.857 Latency(us)
00:29:14.857 Device Information                                                        : IOPS      MiB/s    Average        min            max
00:29:14.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003628.91 1000152.67 1011505.88
00:29:14.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005069.91 1000162.00 1041989.40
00:29:14.857 ========================================================
00:29:14.857 Total                                                                     : 256.00 0.12 1004349.41 1000152.67 1041989.40
00:29:14.857
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 344021
00:29:15.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (344021) - No such process
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 344021
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.116 19:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.116 rmmod nvme_tcp 00:29:15.116 rmmod nvme_fabrics 00:29:15.116 rmmod nvme_keyring 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 343476 ']' 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 343476 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 343476 ']' 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 343476 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343476 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 343476' 00:29:15.116 killing process with pid 343476 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 343476 00:29:15.116 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 343476 00:29:15.374 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.374 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.374 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.374 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.375 19:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.911 00:29:17.911 real 0m12.563s 00:29:17.911 user 0m25.144s 00:29:17.911 sys 0m3.763s 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:17.911 ************************************ 00:29:17.911 END TEST nvmf_delete_subsystem 00:29:17.911 ************************************ 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:17.911 ************************************ 00:29:17.911 START TEST nvmf_host_management 00:29:17.911 ************************************ 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:17.911 * Looking for test storage... 
00:29:17.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.911 19:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.911 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.912 --rc genhtml_branch_coverage=1 00:29:17.912 --rc genhtml_function_coverage=1 00:29:17.912 --rc genhtml_legend=1 00:29:17.912 --rc geninfo_all_blocks=1 00:29:17.912 --rc geninfo_unexecuted_blocks=1 00:29:17.912 00:29:17.912 ' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.912 --rc genhtml_branch_coverage=1 00:29:17.912 --rc genhtml_function_coverage=1 00:29:17.912 --rc genhtml_legend=1 00:29:17.912 --rc geninfo_all_blocks=1 00:29:17.912 --rc geninfo_unexecuted_blocks=1 00:29:17.912 00:29:17.912 ' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.912 --rc genhtml_branch_coverage=1 00:29:17.912 --rc genhtml_function_coverage=1 00:29:17.912 --rc genhtml_legend=1 00:29:17.912 --rc geninfo_all_blocks=1 00:29:17.912 --rc geninfo_unexecuted_blocks=1 00:29:17.912 00:29:17.912 ' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.912 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.912 --rc genhtml_branch_coverage=1 00:29:17.912 --rc genhtml_function_coverage=1 00:29:17.912 --rc genhtml_legend=1 00:29:17.912 --rc geninfo_all_blocks=1 00:29:17.912 --rc geninfo_unexecuted_blocks=1 00:29:17.912 00:29:17.912 ' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.912 19:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.912 
19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.912 19:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.819 
19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.819 19:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:19.819 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.819 19:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:19.819 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.819 19:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.819 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:19.820 Found net devices under 0000:84:00.0: cvl_0_0 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:19.820 Found net devices under 0000:84:00.1: cvl_0_1 00:29:19.820 19:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:29:19.820 00:29:19.820 --- 10.0.0.2 ping statistics --- 00:29:19.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.820 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:29:19.820 00:29:19.820 --- 10.0.0.1 ping statistics --- 00:29:19.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.820 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=346386 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 346386 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 346386 ']' 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.820 19:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.079 [2024-12-06 19:28:04.897513] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:20.079 [2024-12-06 19:28:04.898599] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:29:20.079 [2024-12-06 19:28:04.898654] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.079 [2024-12-06 19:28:04.970475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.079 [2024-12-06 19:28:05.029340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.079 [2024-12-06 19:28:05.029395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.079 [2024-12-06 19:28:05.029420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.079 [2024-12-06 19:28:05.029432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.079 [2024-12-06 19:28:05.029441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:20.079 [2024-12-06 19:28:05.031274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.079 [2024-12-06 19:28:05.031336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.079 [2024-12-06 19:28:05.034742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.079 [2024-12-06 19:28:05.034755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.079 [2024-12-06 19:28:05.125693] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:20.079 [2024-12-06 19:28:05.125950] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:20.079 [2024-12-06 19:28:05.126215] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:20.079 [2024-12-06 19:28:05.126898] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:20.079 [2024-12-06 19:28:05.127137] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 [2024-12-06 19:28:05.179374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 19:28:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 Malloc0 00:29:20.337 [2024-12-06 19:28:05.259629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=346542 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 346542 /var/tmp/bdevperf.sock 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 346542 ']' 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:20.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.337 { 00:29:20.337 "params": { 00:29:20.337 "name": "Nvme$subsystem", 00:29:20.337 "trtype": "$TEST_TRANSPORT", 00:29:20.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.337 "adrfam": "ipv4", 00:29:20.337 "trsvcid": "$NVMF_PORT", 00:29:20.337 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.337 "hdgst": ${hdgst:-false}, 00:29:20.337 "ddgst": ${ddgst:-false} 00:29:20.337 }, 00:29:20.337 "method": "bdev_nvme_attach_controller" 00:29:20.337 } 00:29:20.337 EOF 00:29:20.337 )") 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:20.337 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.337 "params": { 00:29:20.337 "name": "Nvme0", 00:29:20.337 "trtype": "tcp", 00:29:20.337 "traddr": "10.0.0.2", 00:29:20.337 "adrfam": "ipv4", 00:29:20.337 "trsvcid": "4420", 00:29:20.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.337 "hdgst": false, 00:29:20.337 "ddgst": false 00:29:20.337 }, 00:29:20.337 "method": "bdev_nvme_attach_controller" 00:29:20.337 }' 00:29:20.337 [2024-12-06 19:28:05.344561] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:29:20.337 [2024-12-06 19:28:05.344648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346542 ] 00:29:20.593 [2024-12-06 19:28:05.414839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.594 [2024-12-06 19:28:05.474264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.851 Running I/O for 10 seconds... 
00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:20.851 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:20.852 19:28:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:20.852 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:29:21.110 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:29:21.110 19:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.370 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:21.370 [2024-12-06 19:28:06.217165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is 
same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be 
set 00:29:21.370 [2024-12-06 19:28:06.217570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 
19:28:06.217771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217916] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.370 [2024-12-06 19:28:06.217976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.371 [2024-12-06 19:28:06.217988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.371 [2024-12-06 19:28:06.218000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.371 [2024-12-06 19:28:06.218019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.371 [2024-12-06 19:28:06.218035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19280c0 is same with the state(6) to be set 00:29:21.371 [2024-12-06 19:28:06.218175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.218980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.218996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 
19:28:06.219384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.371 [2024-12-06 19:28:06.219416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.371 [2024-12-06 19:28:06.219431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.372 [2024-12-06 
19:28:06.219921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.219972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.219987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:21.372 [2024-12-06 19:28:06.220085] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.372 [2024-12-06 19:28:06.220210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.372 [2024-12-06 19:28:06.220309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:21.372 [2024-12-06 19:28:06.220324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09450 is same with the state(6) to be set 00:29:21.372 [2024-12-06 19:28:06.220492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.372 [2024-12-06 19:28:06.220515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.372 [2024-12-06 19:28:06.220545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.372 [2024-12-06 19:28:06.220572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.372 [2024-12-06 19:28:06.220601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.372 [2024-12-06 19:28:06.220613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf5c60 is same with the state(6) to be set 00:29:21.372 [2024-12-06 19:28:06.221784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:21.372 task offset: 73728 on job bdev=Nvme0n1 fails 00:29:21.372 00:29:21.372 Latency(us) 00:29:21.372 [2024-12-06T18:28:06.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.372 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.372 Job: Nvme0n1 ended in about 0.40 seconds with error 00:29:21.372 Verification LBA range: start 0x0 length 0x400 00:29:21.372 Nvme0n1 : 0.40 1425.18 89.07 158.35 0.00 39293.12 6310.87 33981.63 00:29:21.372 [2024-12-06T18:28:06.421Z] =================================================================================================================== 00:29:21.372 [2024-12-06T18:28:06.421Z] Total : 1425.18 89.07 158.35 0.00 39293.12 6310.87 33981.63 00:29:21.372 [2024-12-06 19:28:06.224926] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:21.372 [2024-12-06 19:28:06.224958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5c60 (9): Bad file descriptor 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.372 19:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@87 -- # sleep 1 00:29:21.372 [2024-12-06 19:28:06.228910] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 346542 00:29:22.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (346542) - No such process 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.307 { 00:29:22.307 "params": { 00:29:22.307 "name": "Nvme$subsystem", 00:29:22.307 "trtype": "$TEST_TRANSPORT", 00:29:22.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.307 "adrfam": "ipv4", 00:29:22.307 "trsvcid": 
"$NVMF_PORT", 00:29:22.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.307 "hdgst": ${hdgst:-false}, 00:29:22.307 "ddgst": ${ddgst:-false} 00:29:22.307 }, 00:29:22.307 "method": "bdev_nvme_attach_controller" 00:29:22.307 } 00:29:22.307 EOF 00:29:22.307 )") 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:22.307 19:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.307 "params": { 00:29:22.307 "name": "Nvme0", 00:29:22.307 "trtype": "tcp", 00:29:22.307 "traddr": "10.0.0.2", 00:29:22.307 "adrfam": "ipv4", 00:29:22.307 "trsvcid": "4420", 00:29:22.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.307 "hdgst": false, 00:29:22.307 "ddgst": false 00:29:22.307 }, 00:29:22.307 "method": "bdev_nvme_attach_controller" 00:29:22.307 }' 00:29:22.307 [2024-12-06 19:28:07.276444] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:29:22.307 [2024-12-06 19:28:07.276530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346702 ] 00:29:22.307 [2024-12-06 19:28:07.347835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.566 [2024-12-06 19:28:07.407014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.824 Running I/O for 1 seconds... 
00:29:23.766 1408.00 IOPS, 88.00 MiB/s 00:29:23.766 Latency(us) 00:29:23.766 [2024-12-06T18:28:08.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.766 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.766 Verification LBA range: start 0x0 length 0x400 00:29:23.766 Nvme0n1 : 1.04 1415.12 88.44 0.00 0.00 44468.68 7961.41 34952.53 00:29:23.766 [2024-12-06T18:28:08.816Z] =================================================================================================================== 00:29:23.767 [2024-12-06T18:28:08.816Z] Total : 1415.12 88.44 0.00 0.00 44468.68 7961.41 34952.53 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:24.025 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:24.026 19:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.026 rmmod nvme_tcp 00:29:24.026 rmmod nvme_fabrics 00:29:24.026 rmmod nvme_keyring 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 346386 ']' 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 346386 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 346386 ']' 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 346386 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.026 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346386 00:29:24.285 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.285 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.285 19:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346386' 00:29:24.285 killing process with pid 346386 00:29:24.285 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 346386 00:29:24.285 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 346386 00:29:24.285 [2024-12-06 19:28:09.311937] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.545 19:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:26.451 00:29:26.451 real 0m8.977s 00:29:26.451 user 0m18.348s 00:29:26.451 sys 0m3.774s 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.451 ************************************ 00:29:26.451 END TEST nvmf_host_management 00:29:26.451 ************************************ 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:26.451 ************************************ 00:29:26.451 START TEST nvmf_lvol 00:29:26.451 ************************************ 00:29:26.451 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:26.451 * Looking for test storage... 
00:29:26.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.711 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.712 --rc genhtml_branch_coverage=1 00:29:26.712 --rc genhtml_function_coverage=1 00:29:26.712 --rc genhtml_legend=1 00:29:26.712 --rc geninfo_all_blocks=1 00:29:26.712 --rc geninfo_unexecuted_blocks=1 00:29:26.712 00:29:26.712 ' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.712 --rc genhtml_branch_coverage=1 00:29:26.712 --rc genhtml_function_coverage=1 00:29:26.712 --rc genhtml_legend=1 00:29:26.712 --rc geninfo_all_blocks=1 00:29:26.712 --rc geninfo_unexecuted_blocks=1 00:29:26.712 00:29:26.712 ' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.712 --rc genhtml_branch_coverage=1 00:29:26.712 --rc genhtml_function_coverage=1 00:29:26.712 --rc genhtml_legend=1 00:29:26.712 --rc geninfo_all_blocks=1 00:29:26.712 --rc geninfo_unexecuted_blocks=1 00:29:26.712 00:29:26.712 ' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.712 --rc genhtml_branch_coverage=1 00:29:26.712 --rc genhtml_function_coverage=1 00:29:26.712 --rc genhtml_legend=1 00:29:26.712 --rc geninfo_all_blocks=1 00:29:26.712 --rc geninfo_unexecuted_blocks=1 00:29:26.712 00:29:26.712 ' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.712 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.713 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.713 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.713 
19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.713 19:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.245 19:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.245 19:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:29.245 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:29.245 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.245 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.245 19:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:29.246 Found net devices under 0000:84:00.0: cvl_0_0 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.246 19:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:29.246 Found net devices under 0000:84:00.1: cvl_0_1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:29:29.246 00:29:29.246 --- 10.0.0.2 ping statistics --- 00:29:29.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.246 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:29:29.246 00:29:29.246 --- 10.0.0.1 ping statistics --- 00:29:29.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.246 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=348920 
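The `waitforlisten 348920` step above polls until the freshly launched `nvmf_tgt` process is up and listening on its RPC socket at `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A minimal sketch of that polling pattern — `wait_for_path` is our own simplified helper, not SPDK's; it only waits for the path to appear, while the real helper also checks the pid is alive and the socket accepts RPCs:

```shell
# Simplified waitforlisten-style poll: wait until a path exists, up to a
# timeout in seconds. Illustrative only; SPDK's helper does more checks.
wait_for_path() {
    local path=$1 timeout=${2:-10} waited=0
    while [ ! -e "$path" ]; do
        if [ "$waited" -ge "$timeout" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}
```

In the trace, the same loop shape is what turns "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." into either a `return 0` or an aborted test.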
00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 348920 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 348920 ']' 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.246 19:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.246 [2024-12-06 19:28:14.032951] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:29.246 [2024-12-06 19:28:14.033973] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:29:29.246 [2024-12-06 19:28:14.034032] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.246 [2024-12-06 19:28:14.103120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.246 [2024-12-06 19:28:14.161258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.246 [2024-12-06 19:28:14.161311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.246 [2024-12-06 19:28:14.161333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.246 [2024-12-06 19:28:14.161344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.246 [2024-12-06 19:28:14.161354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.246 [2024-12-06 19:28:14.162887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.246 [2024-12-06 19:28:14.162948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.246 [2024-12-06 19:28:14.162952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.246 [2024-12-06 19:28:14.253466] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:29.246 [2024-12-06 19:28:14.253693] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:29.246 [2024-12-06 19:28:14.253706] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
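`nvmfappstart -m 0x7` hands DPDK the core mask `-c 0x7`, which is why the EAL banner above reports three cores and reactors start on cores 0, 1 and 2. A small sketch of how such a hex mask expands into a core list — `mask_to_cores` is an illustrative helper name, not an SPDK function:

```shell
# Expand a hex CPU mask (e.g. 0x7) into the list of set core indices.
# Illustrative helper, not part of SPDK.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -ne 0 ]; then
            cores="${cores:+$cores }$core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    printf '%s\n' "$cores"
}
```

The same arithmetic explains why `spdk_nvme_perf -c 0x18` later in the log runs its workers on lcores 3 and 4.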
00:29:29.246 [2024-12-06 19:28:14.253943] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:29.246 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.246 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:29.247 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.247 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.247 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:29.504 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.504 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:29.504 [2024-12-06 19:28:14.543658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.770 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.028 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:30.028 19:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.286 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:30.286 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:30.542 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:30.799 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bb74f1ce-4ee1-49d8-9db1-974b20e9a18f 00:29:30.799 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb74f1ce-4ee1-49d8-9db1-974b20e9a18f lvol 20 00:29:31.056 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9795d21-b0e4-43dd-8b9f-070c55b8d62a 00:29:31.056 19:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:31.313 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9795d21-b0e4-43dd-8b9f-070c55b8d62a 00:29:31.570 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.827 [2024-12-06 19:28:16.787813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.827 19:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.086 
19:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=349354 00:29:32.086 19:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:32.086 19:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:33.462 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f9795d21-b0e4-43dd-8b9f-070c55b8d62a MY_SNAPSHOT 00:29:33.462 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fccc883c-d26c-4246-98ca-21f002ef3ced 00:29:33.462 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f9795d21-b0e4-43dd-8b9f-070c55b8d62a 30 00:29:34.031 19:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fccc883c-d26c-4246-98ca-21f002ef3ced MY_CLONE 00:29:34.290 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4fe959d8-a2ae-4f79-9326-eaf7ebbc0df6 00:29:34.290 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4fe959d8-a2ae-4f79-9326-eaf7ebbc0df6 00:29:34.858 19:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 349354 00:29:43.001 Initializing NVMe Controllers 00:29:43.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:43.001 
Controller IO queue size 128, less than required. 00:29:43.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:43.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:43.001 Initialization complete. Launching workers. 00:29:43.001 ======================================================== 00:29:43.001 Latency(us) 00:29:43.001 Device Information : IOPS MiB/s Average min max 00:29:43.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10496.10 41.00 12196.65 245.76 130004.34 00:29:43.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10384.60 40.56 12331.50 2939.08 76041.04 00:29:43.001 ======================================================== 00:29:43.001 Total : 20880.70 81.57 12263.71 245.76 130004.34 00:29:43.001 00:29:43.001 19:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:43.001 19:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9795d21-b0e4-43dd-8b9f-070c55b8d62a 00:29:43.259 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb74f1ce-4ee1-49d8-9db1-974b20e9a18f 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.517 rmmod nvme_tcp 00:29:43.517 rmmod nvme_fabrics 00:29:43.517 rmmod nvme_keyring 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 348920 ']' 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 348920 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 348920 ']' 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 348920 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 348920 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348920' 00:29:43.517 killing process with pid 348920 00:29:43.517 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 348920 00:29:43.518 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 348920 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.776 19:28:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.776 19:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.317 00:29:46.317 real 0m19.307s 00:29:46.317 user 0m56.696s 00:29:46.317 sys 0m7.859s 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:46.317 ************************************ 00:29:46.317 END TEST nvmf_lvol 00:29:46.317 ************************************ 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:46.317 ************************************ 00:29:46.317 START TEST nvmf_lvs_grow 00:29:46.317 ************************************ 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:46.317 * Looking for test storage... 
00:29:46.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.317 19:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.317 19:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:46.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.317 --rc genhtml_branch_coverage=1 00:29:46.317 --rc genhtml_function_coverage=1 00:29:46.317 --rc genhtml_legend=1 00:29:46.317 --rc geninfo_all_blocks=1 00:29:46.317 --rc geninfo_unexecuted_blocks=1 00:29:46.317 00:29:46.317 ' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:46.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.317 --rc genhtml_branch_coverage=1 00:29:46.317 --rc genhtml_function_coverage=1 00:29:46.317 --rc genhtml_legend=1 00:29:46.317 --rc geninfo_all_blocks=1 00:29:46.317 --rc geninfo_unexecuted_blocks=1 00:29:46.317 00:29:46.317 ' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:46.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.317 --rc genhtml_branch_coverage=1 00:29:46.317 --rc genhtml_function_coverage=1 00:29:46.317 --rc genhtml_legend=1 00:29:46.317 --rc geninfo_all_blocks=1 00:29:46.317 --rc geninfo_unexecuted_blocks=1 00:29:46.317 00:29:46.317 ' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:46.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.317 --rc genhtml_branch_coverage=1 00:29:46.317 --rc genhtml_function_coverage=1 00:29:46.317 --rc genhtml_legend=1 00:29:46.317 --rc geninfo_all_blocks=1 00:29:46.317 --rc 
geninfo_unexecuted_blocks=1 00:29:46.317 00:29:46.317 ' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:46.317 19:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.317 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.318 19:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.318 19:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.318 19:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:48.223 
19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.223 19:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.223 19:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:48.223 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:48.223 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.223 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:48.224 Found net devices under 0000:84:00.0: cvl_0_0 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.224 19:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:48.224 Found net devices under 0000:84:00.1: cvl_0_1 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.224 
19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.224 19:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:29:48.224 00:29:48.224 --- 10.0.0.2 ping statistics --- 00:29:48.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.224 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:29:48.224 00:29:48.224 --- 10.0.0.1 ping statistics --- 00:29:48.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.224 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 19:28:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=352620 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 352620 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 352620 ']' 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.224 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 [2024-12-06 19:28:33.140577] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.224 [2024-12-06 19:28:33.141662] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:29:48.224 [2024-12-06 19:28:33.141740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.224 [2024-12-06 19:28:33.215439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.224 [2024-12-06 19:28:33.269755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.224 [2024-12-06 19:28:33.269810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.224 [2024-12-06 19:28:33.269832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.224 [2024-12-06 19:28:33.269843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.224 [2024-12-06 19:28:33.269852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.224 [2024-12-06 19:28:33.270498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.483 [2024-12-06 19:28:33.353093] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.483 [2024-12-06 19:28:33.353388] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.483 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.742 [2024-12-06 19:28:33.659187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:48.742 ************************************ 00:29:48.742 START TEST lvs_grow_clean 00:29:48.742 ************************************ 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:48.742 19:28:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:48.742 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:49.001 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:49.001 19:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:49.260 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8073a870-b8a2-4506-b4db-38239ef008c6 00:29:49.260 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:29:49.260 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:49.518 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:49.518 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:49.519 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8073a870-b8a2-4506-b4db-38239ef008c6 lvol 150 00:29:49.776 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b9bee13-be37-4f70-a0a2-de94a5119c04 00:29:49.776 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:49.776 19:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:50.033 [2024-12-06 19:28:35.071035] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:50.033 [2024-12-06 19:28:35.071149] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:50.033 true 00:29:50.291 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:29:50.291 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:50.551 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:50.551 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:50.809 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b9bee13-be37-4f70-a0a2-de94a5119c04 00:29:51.068 19:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.327 [2024-12-06 19:28:36.163422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.327 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=353059 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 353059 /var/tmp/bdevperf.sock 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 353059 ']' 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:51.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.586 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.586 [2024-12-06 19:28:36.507025] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:29:51.586 [2024-12-06 19:28:36.507109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353059 ] 00:29:51.586 [2024-12-06 19:28:36.578796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.844 [2024-12-06 19:28:36.640547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.844 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.844 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:51.844 19:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:52.469 Nvme0n1 00:29:52.469 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:52.728 [ 00:29:52.728 { 00:29:52.728 "name": "Nvme0n1", 00:29:52.728 "aliases": [ 00:29:52.728 "7b9bee13-be37-4f70-a0a2-de94a5119c04" 00:29:52.728 ], 00:29:52.728 "product_name": "NVMe disk", 00:29:52.728 
"block_size": 4096, 00:29:52.728 "num_blocks": 38912, 00:29:52.728 "uuid": "7b9bee13-be37-4f70-a0a2-de94a5119c04", 00:29:52.728 "numa_id": 1, 00:29:52.728 "assigned_rate_limits": { 00:29:52.728 "rw_ios_per_sec": 0, 00:29:52.728 "rw_mbytes_per_sec": 0, 00:29:52.728 "r_mbytes_per_sec": 0, 00:29:52.728 "w_mbytes_per_sec": 0 00:29:52.728 }, 00:29:52.728 "claimed": false, 00:29:52.728 "zoned": false, 00:29:52.728 "supported_io_types": { 00:29:52.728 "read": true, 00:29:52.728 "write": true, 00:29:52.728 "unmap": true, 00:29:52.728 "flush": true, 00:29:52.728 "reset": true, 00:29:52.728 "nvme_admin": true, 00:29:52.728 "nvme_io": true, 00:29:52.728 "nvme_io_md": false, 00:29:52.728 "write_zeroes": true, 00:29:52.728 "zcopy": false, 00:29:52.728 "get_zone_info": false, 00:29:52.728 "zone_management": false, 00:29:52.728 "zone_append": false, 00:29:52.728 "compare": true, 00:29:52.729 "compare_and_write": true, 00:29:52.729 "abort": true, 00:29:52.729 "seek_hole": false, 00:29:52.729 "seek_data": false, 00:29:52.729 "copy": true, 00:29:52.729 "nvme_iov_md": false 00:29:52.729 }, 00:29:52.729 "memory_domains": [ 00:29:52.729 { 00:29:52.729 "dma_device_id": "system", 00:29:52.729 "dma_device_type": 1 00:29:52.729 } 00:29:52.729 ], 00:29:52.729 "driver_specific": { 00:29:52.729 "nvme": [ 00:29:52.729 { 00:29:52.729 "trid": { 00:29:52.729 "trtype": "TCP", 00:29:52.729 "adrfam": "IPv4", 00:29:52.729 "traddr": "10.0.0.2", 00:29:52.729 "trsvcid": "4420", 00:29:52.729 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:52.729 }, 00:29:52.729 "ctrlr_data": { 00:29:52.729 "cntlid": 1, 00:29:52.729 "vendor_id": "0x8086", 00:29:52.729 "model_number": "SPDK bdev Controller", 00:29:52.729 "serial_number": "SPDK0", 00:29:52.729 "firmware_revision": "25.01", 00:29:52.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.729 "oacs": { 00:29:52.729 "security": 0, 00:29:52.729 "format": 0, 00:29:52.729 "firmware": 0, 00:29:52.729 "ns_manage": 0 00:29:52.729 }, 00:29:52.729 "multi_ctrlr": true, 
00:29:52.729 "ana_reporting": false 00:29:52.729 }, 00:29:52.729 "vs": { 00:29:52.729 "nvme_version": "1.3" 00:29:52.729 }, 00:29:52.729 "ns_data": { 00:29:52.729 "id": 1, 00:29:52.729 "can_share": true 00:29:52.729 } 00:29:52.729 } 00:29:52.729 ], 00:29:52.729 "mp_policy": "active_passive" 00:29:52.729 } 00:29:52.729 } 00:29:52.729 ] 00:29:52.729 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=353191 00:29:52.729 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:52.729 19:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:52.729 Running I/O for 10 seconds... 00:29:53.665 Latency(us) 00:29:53.665 [2024-12-06T18:28:38.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.665 Nvme0n1 : 1.00 16637.00 64.99 0.00 0.00 0.00 0.00 0.00 00:29:53.665 [2024-12-06T18:28:38.714Z] =================================================================================================================== 00:29:53.665 [2024-12-06T18:28:38.714Z] Total : 16637.00 64.99 0.00 0.00 0.00 0.00 0.00 00:29:53.665 00:29:54.598 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:29:54.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.855 Nvme0n1 : 2.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:29:54.855 [2024-12-06T18:28:39.904Z] 
=================================================================================================================== 00:29:54.855 [2024-12-06T18:28:39.904Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:29:54.855 00:29:54.855 true 00:29:54.855 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:29:54.855 19:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:55.420 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:55.420 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:55.420 19:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 353191 00:29:55.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.677 Nvme0n1 : 3.00 16721.67 65.32 0.00 0.00 0.00 0.00 0.00 00:29:55.677 [2024-12-06T18:28:40.726Z] =================================================================================================================== 00:29:55.677 [2024-12-06T18:28:40.726Z] Total : 16721.67 65.32 0.00 0.00 0.00 0.00 0.00 00:29:55.677 00:29:57.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.055 Nvme0n1 : 4.00 16827.50 65.73 0.00 0.00 0.00 0.00 0.00 00:29:57.055 [2024-12-06T18:28:42.104Z] =================================================================================================================== 00:29:57.055 [2024-12-06T18:28:42.104Z] Total : 16827.50 65.73 0.00 0.00 0.00 0.00 0.00 00:29:57.055 00:29:57.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:29:57.990 Nvme0n1 : 5.00 16903.80 66.03 0.00 0.00 0.00 0.00 0.00 00:29:57.990 [2024-12-06T18:28:43.039Z] =================================================================================================================== 00:29:57.990 [2024-12-06T18:28:43.039Z] Total : 16903.80 66.03 0.00 0.00 0.00 0.00 0.00 00:29:57.990 00:29:58.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.930 Nvme0n1 : 6.00 16965.17 66.27 0.00 0.00 0.00 0.00 0.00 00:29:58.930 [2024-12-06T18:28:43.979Z] =================================================================================================================== 00:29:58.930 [2024-12-06T18:28:43.979Z] Total : 16965.17 66.27 0.00 0.00 0.00 0.00 0.00 00:29:58.930 00:29:59.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.867 Nvme0n1 : 7.00 17018.14 66.48 0.00 0.00 0.00 0.00 0.00 00:29:59.867 [2024-12-06T18:28:44.916Z] =================================================================================================================== 00:29:59.867 [2024-12-06T18:28:44.916Z] Total : 17018.14 66.48 0.00 0.00 0.00 0.00 0.00 00:29:59.867 00:30:00.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.802 Nvme0n1 : 8.00 17038.25 66.56 0.00 0.00 0.00 0.00 0.00 00:30:00.802 [2024-12-06T18:28:45.851Z] =================================================================================================================== 00:30:00.802 [2024-12-06T18:28:45.851Z] Total : 17038.25 66.56 0.00 0.00 0.00 0.00 0.00 00:30:00.802 00:30:01.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.737 Nvme0n1 : 9.00 17043.00 66.57 0.00 0.00 0.00 0.00 0.00 00:30:01.737 [2024-12-06T18:28:46.786Z] =================================================================================================================== 00:30:01.737 [2024-12-06T18:28:46.786Z] Total : 17043.00 66.57 0.00 0.00 0.00 0.00 0.00 00:30:01.737 
00:30:02.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.676 Nvme0n1 : 10.00 17040.50 66.56 0.00 0.00 0.00 0.00 0.00 00:30:02.676 [2024-12-06T18:28:47.725Z] =================================================================================================================== 00:30:02.676 [2024-12-06T18:28:47.725Z] Total : 17040.50 66.56 0.00 0.00 0.00 0.00 0.00 00:30:02.676 00:30:02.676 00:30:02.676 Latency(us) 00:30:02.676 [2024-12-06T18:28:47.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:02.676 Nvme0n1 : 10.01 17038.33 66.56 0.00 0.00 7507.69 3932.16 16214.09 00:30:02.676 [2024-12-06T18:28:47.725Z] =================================================================================================================== 00:30:02.676 [2024-12-06T18:28:47.725Z] Total : 17038.33 66.56 0.00 0.00 7507.69 3932.16 16214.09 00:30:02.676 { 00:30:02.676 "results": [ 00:30:02.676 { 00:30:02.676 "job": "Nvme0n1", 00:30:02.676 "core_mask": "0x2", 00:30:02.676 "workload": "randwrite", 00:30:02.676 "status": "finished", 00:30:02.676 "queue_depth": 128, 00:30:02.676 "io_size": 4096, 00:30:02.676 "runtime": 10.005087, 00:30:02.676 "iops": 17038.332600206275, 00:30:02.676 "mibps": 66.55598671955576, 00:30:02.676 "io_failed": 0, 00:30:02.676 "io_timeout": 0, 00:30:02.676 "avg_latency_us": 7507.694431421626, 00:30:02.676 "min_latency_us": 3932.16, 00:30:02.676 "max_latency_us": 16214.091851851852 00:30:02.676 } 00:30:02.676 ], 00:30:02.676 "core_count": 1 00:30:02.676 } 00:30:02.676 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 353059 00:30:02.676 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 353059 ']' 00:30:02.676 19:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 353059 00:30:02.676 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:02.676 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.676 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353059 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353059' 00:30:02.937 killing process with pid 353059 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 353059 00:30:02.937 Received shutdown signal, test time was about 10.000000 seconds 00:30:02.937 00:30:02.937 Latency(us) 00:30:02.937 [2024-12-06T18:28:47.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.937 [2024-12-06T18:28:47.986Z] =================================================================================================================== 00:30:02.937 [2024-12-06T18:28:47.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 353059 00:30:02.937 19:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.198 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.765 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:03.765 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:03.765 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:03.765 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:03.765 19:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:04.023 [2024-12-06 19:28:49.047063] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:04.284 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:04.568 request: 00:30:04.568 { 00:30:04.568 "uuid": "8073a870-b8a2-4506-b4db-38239ef008c6", 00:30:04.568 "method": 
"bdev_lvol_get_lvstores", 00:30:04.568 "req_id": 1 00:30:04.568 } 00:30:04.568 Got JSON-RPC error response 00:30:04.568 response: 00:30:04.568 { 00:30:04.568 "code": -19, 00:30:04.568 "message": "No such device" 00:30:04.568 } 00:30:04.568 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:04.568 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:04.568 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:04.568 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:04.569 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:04.841 aio_bdev 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7b9bee13-be37-4f70-a0a2-de94a5119c04 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7b9bee13-be37-4f70-a0a2-de94a5119c04 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:04.841 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:05.109 19:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b9bee13-be37-4f70-a0a2-de94a5119c04 -t 2000 00:30:05.401 [ 00:30:05.401 { 00:30:05.401 "name": "7b9bee13-be37-4f70-a0a2-de94a5119c04", 00:30:05.401 "aliases": [ 00:30:05.401 "lvs/lvol" 00:30:05.401 ], 00:30:05.401 "product_name": "Logical Volume", 00:30:05.401 "block_size": 4096, 00:30:05.401 "num_blocks": 38912, 00:30:05.401 "uuid": "7b9bee13-be37-4f70-a0a2-de94a5119c04", 00:30:05.401 "assigned_rate_limits": { 00:30:05.401 "rw_ios_per_sec": 0, 00:30:05.401 "rw_mbytes_per_sec": 0, 00:30:05.401 "r_mbytes_per_sec": 0, 00:30:05.401 "w_mbytes_per_sec": 0 00:30:05.401 }, 00:30:05.401 "claimed": false, 00:30:05.401 "zoned": false, 00:30:05.401 "supported_io_types": { 00:30:05.401 "read": true, 00:30:05.401 "write": true, 00:30:05.401 "unmap": true, 00:30:05.401 "flush": false, 00:30:05.401 "reset": true, 00:30:05.401 "nvme_admin": false, 00:30:05.401 "nvme_io": false, 00:30:05.401 "nvme_io_md": false, 00:30:05.401 "write_zeroes": true, 00:30:05.401 "zcopy": false, 00:30:05.401 "get_zone_info": false, 00:30:05.401 "zone_management": false, 00:30:05.401 "zone_append": false, 00:30:05.401 "compare": false, 00:30:05.401 "compare_and_write": false, 00:30:05.401 "abort": false, 00:30:05.401 "seek_hole": true, 00:30:05.401 "seek_data": true, 00:30:05.401 "copy": false, 00:30:05.401 "nvme_iov_md": false 00:30:05.401 }, 00:30:05.401 "driver_specific": { 00:30:05.401 "lvol": { 00:30:05.401 "lvol_store_uuid": "8073a870-b8a2-4506-b4db-38239ef008c6", 00:30:05.401 "base_bdev": "aio_bdev", 00:30:05.401 
"thin_provision": false, 00:30:05.401 "num_allocated_clusters": 38, 00:30:05.401 "snapshot": false, 00:30:05.401 "clone": false, 00:30:05.401 "esnap_clone": false 00:30:05.401 } 00:30:05.401 } 00:30:05.401 } 00:30:05.401 ] 00:30:05.401 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:05.401 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:05.401 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:05.692 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:05.692 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8073a870-b8a2-4506-b4db-38239ef008c6 00:30:05.692 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:05.972 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:05.972 19:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b9bee13-be37-4f70-a0a2-de94a5119c04 00:30:06.251 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8073a870-b8a2-4506-b4db-38239ef008c6 
00:30:06.549 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:06.809 00:30:06.809 real 0m17.920s 00:30:06.809 user 0m17.476s 00:30:06.809 sys 0m1.963s 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:06.809 ************************************ 00:30:06.809 END TEST lvs_grow_clean 00:30:06.809 ************************************ 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:06.809 ************************************ 00:30:06.809 START TEST lvs_grow_dirty 00:30:06.809 ************************************ 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:06.809 19:28:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:06.809 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:07.068 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:07.068 19:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:07.326 19:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1af08222-5dab-4418-8df0-cc66c48c5389 00:30:07.326 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:07.326 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:07.582 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:07.582 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:07.582 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1af08222-5dab-4418-8df0-cc66c48c5389 lvol 150 00:30:07.839 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:07.839 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.839 19:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:08.099 [2024-12-06 19:28:53.051053] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:08.099 [2024-12-06 
19:28:53.051163] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:08.099 true 00:30:08.099 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:08.100 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:08.359 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:08.359 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:08.619 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:08.878 19:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.137 [2024-12-06 19:28:54.139350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.137 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=355226 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 355226 /var/tmp/bdevperf.sock 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 355226 ']' 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.395 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:09.652 [2024-12-06 19:28:54.465085] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:30:09.652 [2024-12-06 19:28:54.465167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355226 ] 00:30:09.652 [2024-12-06 19:28:54.535752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.652 [2024-12-06 19:28:54.594398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.652 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.652 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:09.652 19:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:10.216 Nvme0n1 00:30:10.216 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:10.473 [ 00:30:10.473 { 00:30:10.473 "name": "Nvme0n1", 00:30:10.473 "aliases": [ 00:30:10.473 "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a" 00:30:10.473 ], 00:30:10.473 "product_name": "NVMe disk", 00:30:10.473 "block_size": 4096, 00:30:10.473 "num_blocks": 38912, 00:30:10.473 "uuid": "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a", 00:30:10.473 "numa_id": 1, 00:30:10.473 "assigned_rate_limits": { 00:30:10.473 "rw_ios_per_sec": 0, 00:30:10.473 "rw_mbytes_per_sec": 0, 00:30:10.473 "r_mbytes_per_sec": 0, 00:30:10.473 "w_mbytes_per_sec": 0 00:30:10.473 }, 00:30:10.473 "claimed": false, 00:30:10.473 "zoned": false, 
00:30:10.473 "supported_io_types": { 00:30:10.473 "read": true, 00:30:10.473 "write": true, 00:30:10.473 "unmap": true, 00:30:10.473 "flush": true, 00:30:10.473 "reset": true, 00:30:10.473 "nvme_admin": true, 00:30:10.473 "nvme_io": true, 00:30:10.473 "nvme_io_md": false, 00:30:10.473 "write_zeroes": true, 00:30:10.473 "zcopy": false, 00:30:10.473 "get_zone_info": false, 00:30:10.473 "zone_management": false, 00:30:10.473 "zone_append": false, 00:30:10.473 "compare": true, 00:30:10.473 "compare_and_write": true, 00:30:10.473 "abort": true, 00:30:10.473 "seek_hole": false, 00:30:10.473 "seek_data": false, 00:30:10.474 "copy": true, 00:30:10.474 "nvme_iov_md": false 00:30:10.474 }, 00:30:10.474 "memory_domains": [ 00:30:10.474 { 00:30:10.474 "dma_device_id": "system", 00:30:10.474 "dma_device_type": 1 00:30:10.474 } 00:30:10.474 ], 00:30:10.474 "driver_specific": { 00:30:10.474 "nvme": [ 00:30:10.474 { 00:30:10.474 "trid": { 00:30:10.474 "trtype": "TCP", 00:30:10.474 "adrfam": "IPv4", 00:30:10.474 "traddr": "10.0.0.2", 00:30:10.474 "trsvcid": "4420", 00:30:10.474 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.474 }, 00:30:10.474 "ctrlr_data": { 00:30:10.474 "cntlid": 1, 00:30:10.474 "vendor_id": "0x8086", 00:30:10.474 "model_number": "SPDK bdev Controller", 00:30:10.474 "serial_number": "SPDK0", 00:30:10.474 "firmware_revision": "25.01", 00:30:10.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.474 "oacs": { 00:30:10.474 "security": 0, 00:30:10.474 "format": 0, 00:30:10.474 "firmware": 0, 00:30:10.474 "ns_manage": 0 00:30:10.474 }, 00:30:10.474 "multi_ctrlr": true, 00:30:10.474 "ana_reporting": false 00:30:10.474 }, 00:30:10.474 "vs": { 00:30:10.474 "nvme_version": "1.3" 00:30:10.474 }, 00:30:10.474 "ns_data": { 00:30:10.474 "id": 1, 00:30:10.474 "can_share": true 00:30:10.474 } 00:30:10.474 } 00:30:10.474 ], 00:30:10.474 "mp_policy": "active_passive" 00:30:10.474 } 00:30:10.474 } 00:30:10.474 ] 00:30:10.474 19:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=355362 00:30:10.474 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.474 19:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:10.732 Running I/O for 10 seconds... 00:30:11.668 Latency(us) 00:30:11.668 [2024-12-06T18:28:56.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.668 Nvme0n1 : 1.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:11.668 [2024-12-06T18:28:56.717Z] =================================================================================================================== 00:30:11.668 [2024-12-06T18:28:56.717Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:30:11.668 00:30:12.603 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:12.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.603 Nvme0n1 : 2.00 16573.50 64.74 0.00 0.00 0.00 0.00 0.00 00:30:12.603 [2024-12-06T18:28:57.652Z] =================================================================================================================== 00:30:12.603 [2024-12-06T18:28:57.652Z] Total : 16573.50 64.74 0.00 0.00 0.00 0.00 0.00 00:30:12.603 00:30:12.861 true 00:30:12.861 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:12.861 19:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:13.119 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:13.119 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:13.119 19:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 355362 00:30:13.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.686 Nvme0n1 : 3.00 16679.33 65.15 0.00 0.00 0.00 0.00 0.00 00:30:13.686 [2024-12-06T18:28:58.735Z] =================================================================================================================== 00:30:13.686 [2024-12-06T18:28:58.735Z] Total : 16679.33 65.15 0.00 0.00 0.00 0.00 0.00 00:30:13.686 00:30:14.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.621 Nvme0n1 : 4.00 16780.00 65.55 0.00 0.00 0.00 0.00 0.00 00:30:14.621 [2024-12-06T18:28:59.670Z] =================================================================================================================== 00:30:14.621 [2024-12-06T18:28:59.670Z] Total : 16780.00 65.55 0.00 0.00 0.00 0.00 0.00 00:30:14.621 00:30:15.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.561 Nvme0n1 : 5.00 16847.00 65.81 0.00 0.00 0.00 0.00 0.00 00:30:15.561 [2024-12-06T18:29:00.610Z] =================================================================================================================== 00:30:15.561 [2024-12-06T18:29:00.610Z] Total : 16847.00 65.81 0.00 0.00 0.00 0.00 0.00 00:30:15.561 00:30:16.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:16.939 Nvme0n1 : 6.00 16896.67 66.00 0.00 0.00 0.00 0.00 0.00 00:30:16.939 [2024-12-06T18:29:01.988Z] =================================================================================================================== 00:30:16.939 [2024-12-06T18:29:01.988Z] Total : 16896.67 66.00 0.00 0.00 0.00 0.00 0.00 00:30:16.939 00:30:17.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.875 Nvme0n1 : 7.00 16950.29 66.21 0.00 0.00 0.00 0.00 0.00 00:30:17.875 [2024-12-06T18:29:02.924Z] =================================================================================================================== 00:30:17.875 [2024-12-06T18:29:02.924Z] Total : 16950.29 66.21 0.00 0.00 0.00 0.00 0.00 00:30:17.875 00:30:18.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.809 Nvme0n1 : 8.00 17006.38 66.43 0.00 0.00 0.00 0.00 0.00 00:30:18.809 [2024-12-06T18:29:03.858Z] =================================================================================================================== 00:30:18.809 [2024-12-06T18:29:03.858Z] Total : 17006.38 66.43 0.00 0.00 0.00 0.00 0.00 00:30:18.809 00:30:19.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.748 Nvme0n1 : 9.00 17035.89 66.55 0.00 0.00 0.00 0.00 0.00 00:30:19.748 [2024-12-06T18:29:04.797Z] =================================================================================================================== 00:30:19.748 [2024-12-06T18:29:04.797Z] Total : 17035.89 66.55 0.00 0.00 0.00 0.00 0.00 00:30:19.748 00:30:20.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.687 Nvme0n1 : 10.00 17002.40 66.42 0.00 0.00 0.00 0.00 0.00 00:30:20.687 [2024-12-06T18:29:05.736Z] =================================================================================================================== 00:30:20.687 [2024-12-06T18:29:05.736Z] Total : 17002.40 66.42 0.00 0.00 0.00 0.00 0.00 00:30:20.687 00:30:20.687 
00:30:20.687 Latency(us) 00:30:20.687 [2024-12-06T18:29:05.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.687 Nvme0n1 : 10.01 17007.40 66.44 0.00 0.00 7521.97 3932.16 17185.00 00:30:20.687 [2024-12-06T18:29:05.736Z] =================================================================================================================== 00:30:20.687 [2024-12-06T18:29:05.736Z] Total : 17007.40 66.44 0.00 0.00 7521.97 3932.16 17185.00 00:30:20.687 { 00:30:20.687 "results": [ 00:30:20.687 { 00:30:20.687 "job": "Nvme0n1", 00:30:20.687 "core_mask": "0x2", 00:30:20.687 "workload": "randwrite", 00:30:20.687 "status": "finished", 00:30:20.687 "queue_depth": 128, 00:30:20.687 "io_size": 4096, 00:30:20.687 "runtime": 10.008292, 00:30:20.687 "iops": 17007.397466021175, 00:30:20.687 "mibps": 66.43514635164522, 00:30:20.687 "io_failed": 0, 00:30:20.687 "io_timeout": 0, 00:30:20.687 "avg_latency_us": 7521.9728458278805, 00:30:20.687 "min_latency_us": 3932.16, 00:30:20.687 "max_latency_us": 17184.995555555557 00:30:20.687 } 00:30:20.687 ], 00:30:20.687 "core_count": 1 00:30:20.687 } 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 355226 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 355226 ']' 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 355226 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.687 19:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355226 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355226' 00:30:20.687 killing process with pid 355226 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 355226 00:30:20.687 Received shutdown signal, test time was about 10.000000 seconds 00:30:20.687 00:30:20.687 Latency(us) 00:30:20.687 [2024-12-06T18:29:05.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.687 [2024-12-06T18:29:05.736Z] =================================================================================================================== 00:30:20.687 [2024-12-06T18:29:05.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.687 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 355226 00:30:20.948 19:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.207 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.467 19:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:21.467 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 352620 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 352620 00:30:21.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 352620 Killed "${NVMF_APP[@]}" "$@" 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=356688 00:30:21.740 19:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 356688 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 356688 ']' 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.740 19:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:21.998 [2024-12-06 19:29:06.799206] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:21.998 [2024-12-06 19:29:06.800246] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:30:21.998 [2024-12-06 19:29:06.800312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.998 [2024-12-06 19:29:06.871842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.998 [2024-12-06 19:29:06.927713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.998 [2024-12-06 19:29:06.927789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.998 [2024-12-06 19:29:06.927811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.998 [2024-12-06 19:29:06.927823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.998 [2024-12-06 19:29:06.927832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.998 [2024-12-06 19:29:06.928434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.998 [2024-12-06 19:29:07.012801] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:21.998 [2024-12-06 19:29:07.013097] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:21.998 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.998 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:21.998 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.998 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.998 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:22.256 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.256 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:22.516 [2024-12-06 19:29:07.319136] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:22.516 [2024-12-06 19:29:07.319281] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:22.516 [2024-12-06 19:29:07.319328] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:22.516 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.776 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a -t 2000 00:30:23.038 [ 00:30:23.038 { 00:30:23.038 "name": "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a", 00:30:23.038 "aliases": [ 00:30:23.038 "lvs/lvol" 00:30:23.038 ], 00:30:23.038 "product_name": "Logical Volume", 00:30:23.038 "block_size": 4096, 00:30:23.038 "num_blocks": 38912, 00:30:23.038 "uuid": "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a", 00:30:23.038 "assigned_rate_limits": { 00:30:23.038 "rw_ios_per_sec": 0, 00:30:23.038 "rw_mbytes_per_sec": 0, 00:30:23.038 "r_mbytes_per_sec": 0, 00:30:23.038 "w_mbytes_per_sec": 0 00:30:23.038 }, 00:30:23.038 "claimed": false, 00:30:23.038 "zoned": false, 00:30:23.038 "supported_io_types": { 00:30:23.038 "read": true, 00:30:23.038 "write": true, 00:30:23.038 "unmap": true, 00:30:23.038 "flush": false, 00:30:23.038 "reset": true, 00:30:23.038 "nvme_admin": false, 00:30:23.038 "nvme_io": false, 00:30:23.038 "nvme_io_md": false, 00:30:23.038 "write_zeroes": true, 
00:30:23.038 "zcopy": false, 00:30:23.038 "get_zone_info": false, 00:30:23.038 "zone_management": false, 00:30:23.038 "zone_append": false, 00:30:23.038 "compare": false, 00:30:23.038 "compare_and_write": false, 00:30:23.038 "abort": false, 00:30:23.038 "seek_hole": true, 00:30:23.038 "seek_data": true, 00:30:23.038 "copy": false, 00:30:23.038 "nvme_iov_md": false 00:30:23.038 }, 00:30:23.038 "driver_specific": { 00:30:23.038 "lvol": { 00:30:23.038 "lvol_store_uuid": "1af08222-5dab-4418-8df0-cc66c48c5389", 00:30:23.038 "base_bdev": "aio_bdev", 00:30:23.038 "thin_provision": false, 00:30:23.038 "num_allocated_clusters": 38, 00:30:23.038 "snapshot": false, 00:30:23.038 "clone": false, 00:30:23.038 "esnap_clone": false 00:30:23.038 } 00:30:23.038 } 00:30:23.038 } 00:30:23.038 ] 00:30:23.038 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:23.038 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:23.038 19:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:23.301 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:23.301 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:23.301 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:23.562 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:23.562 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.821 [2024-12-06 19:29:08.688983] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:23.821 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:24.081 request: 00:30:24.081 { 00:30:24.081 "uuid": "1af08222-5dab-4418-8df0-cc66c48c5389", 00:30:24.081 "method": "bdev_lvol_get_lvstores", 00:30:24.081 "req_id": 1 00:30:24.081 } 00:30:24.081 Got JSON-RPC error response 00:30:24.081 response: 00:30:24.081 { 00:30:24.081 "code": -19, 00:30:24.081 "message": "No such device" 00:30:24.081 } 00:30:24.081 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:24.081 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:24.081 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:24.081 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:24.081 19:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.342 aio_bdev 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:24.342 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:24.604 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a -t 2000 00:30:24.864 [ 00:30:24.864 { 00:30:24.864 "name": "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a", 00:30:24.864 "aliases": [ 00:30:24.864 "lvs/lvol" 00:30:24.864 ], 00:30:24.864 "product_name": "Logical Volume", 00:30:24.864 "block_size": 4096, 00:30:24.864 "num_blocks": 38912, 00:30:24.864 "uuid": "511bcbb8-2be8-44e8-bcaa-366a6dec2f0a", 00:30:24.864 "assigned_rate_limits": { 00:30:24.864 "rw_ios_per_sec": 0, 00:30:24.864 "rw_mbytes_per_sec": 0, 00:30:24.864 
"r_mbytes_per_sec": 0, 00:30:24.864 "w_mbytes_per_sec": 0 00:30:24.864 }, 00:30:24.864 "claimed": false, 00:30:24.864 "zoned": false, 00:30:24.864 "supported_io_types": { 00:30:24.864 "read": true, 00:30:24.864 "write": true, 00:30:24.864 "unmap": true, 00:30:24.864 "flush": false, 00:30:24.864 "reset": true, 00:30:24.864 "nvme_admin": false, 00:30:24.864 "nvme_io": false, 00:30:24.864 "nvme_io_md": false, 00:30:24.864 "write_zeroes": true, 00:30:24.864 "zcopy": false, 00:30:24.864 "get_zone_info": false, 00:30:24.864 "zone_management": false, 00:30:24.864 "zone_append": false, 00:30:24.864 "compare": false, 00:30:24.864 "compare_and_write": false, 00:30:24.864 "abort": false, 00:30:24.864 "seek_hole": true, 00:30:24.864 "seek_data": true, 00:30:24.864 "copy": false, 00:30:24.864 "nvme_iov_md": false 00:30:24.864 }, 00:30:24.864 "driver_specific": { 00:30:24.864 "lvol": { 00:30:24.864 "lvol_store_uuid": "1af08222-5dab-4418-8df0-cc66c48c5389", 00:30:24.864 "base_bdev": "aio_bdev", 00:30:24.864 "thin_provision": false, 00:30:24.864 "num_allocated_clusters": 38, 00:30:24.864 "snapshot": false, 00:30:24.864 "clone": false, 00:30:24.864 "esnap_clone": false 00:30:24.864 } 00:30:24.864 } 00:30:24.864 } 00:30:24.864 ] 00:30:24.864 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:24.864 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:24.864 19:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:25.124 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:25.124 19:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:25.124 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:25.385 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:25.385 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 511bcbb8-2be8-44e8-bcaa-366a6dec2f0a 00:30:25.644 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1af08222-5dab-4418-8df0-cc66c48c5389 00:30:25.907 19:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:26.472 00:30:26.472 real 0m19.567s 00:30:26.472 user 0m36.342s 00:30:26.472 sys 0m5.001s 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:26.472 ************************************ 00:30:26.472 END TEST lvs_grow_dirty 00:30:26.472 ************************************ 
00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:26.472 nvmf_trace.0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.472 19:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.472 rmmod nvme_tcp 00:30:26.472 rmmod nvme_fabrics 00:30:26.472 rmmod nvme_keyring 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 356688 ']' 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 356688 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 356688 ']' 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 356688 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356688 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.472 19:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356688' 00:30:26.472 killing process with pid 356688 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 356688 00:30:26.472 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 356688 00:30:26.732 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.732 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.732 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.732 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.733 19:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.631 19:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.631 00:30:28.631 real 0m42.831s 00:30:28.631 user 0m55.499s 00:30:28.631 sys 0m8.914s 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:28.631 ************************************ 00:30:28.631 END TEST nvmf_lvs_grow 00:30:28.631 ************************************ 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.631 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.890 ************************************ 00:30:28.890 START TEST nvmf_bdev_io_wait 00:30:28.890 ************************************ 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:28.890 * Looking for test storage... 
00:30:28.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.890 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.891 --rc genhtml_branch_coverage=1 00:30:28.891 --rc genhtml_function_coverage=1 00:30:28.891 --rc genhtml_legend=1 00:30:28.891 --rc geninfo_all_blocks=1 00:30:28.891 --rc geninfo_unexecuted_blocks=1 00:30:28.891 00:30:28.891 ' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.891 --rc genhtml_branch_coverage=1 00:30:28.891 --rc genhtml_function_coverage=1 00:30:28.891 --rc genhtml_legend=1 00:30:28.891 --rc geninfo_all_blocks=1 00:30:28.891 --rc geninfo_unexecuted_blocks=1 00:30:28.891 00:30:28.891 ' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.891 --rc genhtml_branch_coverage=1 00:30:28.891 --rc genhtml_function_coverage=1 00:30:28.891 --rc genhtml_legend=1 00:30:28.891 --rc geninfo_all_blocks=1 00:30:28.891 --rc geninfo_unexecuted_blocks=1 00:30:28.891 00:30:28.891 ' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:28.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.891 --rc genhtml_branch_coverage=1 00:30:28.891 --rc genhtml_function_coverage=1 
00:30:28.891 --rc genhtml_legend=1 00:30:28.891 --rc geninfo_all_blocks=1 00:30:28.891 --rc geninfo_unexecuted_blocks=1 00:30:28.891 00:30:28.891 ' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:28.891 19:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.891 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.892 19:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.892 19:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.892 19:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.892 19:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:31.423 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.423 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:31.424 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:31.424 Found 
0000:84:00.1 (0x8086 - 0x159b) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:31.424 Found net devices under 0000:84:00.0: cvl_0_0 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:31.424 Found net devices under 0000:84:00.1: cvl_0_1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.424 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:31.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:30:31.424 00:30:31.424 --- 10.0.0.2 ping statistics --- 00:30:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.424 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:30:31.424 00:30:31.424 --- 10.0.0.1 ping statistics --- 00:30:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.424 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.424 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.424 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=359223 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 359223 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 359223 ']' 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.425 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.425 [2024-12-06 19:29:16.333627] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:31.425 [2024-12-06 19:29:16.334704] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:31.425 [2024-12-06 19:29:16.334768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.425 [2024-12-06 19:29:16.405040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:31.425 [2024-12-06 19:29:16.459982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.425 [2024-12-06 19:29:16.460051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.425 [2024-12-06 19:29:16.460079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.425 [2024-12-06 19:29:16.460090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.425 [2024-12-06 19:29:16.460099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:31.425 [2024-12-06 19:29:16.461731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.425 [2024-12-06 19:29:16.461788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.425 [2024-12-06 19:29:16.461852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.425 [2024-12-06 19:29:16.461855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.425 [2024-12-06 19:29:16.462373] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 [2024-12-06 19:29:16.653272] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:31.685 [2024-12-06 19:29:16.653490] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:31.685 [2024-12-06 19:29:16.654394] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:31.685 [2024-12-06 19:29:16.655251] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 [2024-12-06 19:29:16.662608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 Malloc0 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 [2024-12-06 19:29:16.722821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.685 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=359247 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=359248 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:31.686 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=359250 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:31.686 { 00:30:31.686 "params": { 00:30:31.686 "name": "Nvme$subsystem", 00:30:31.686 "trtype": "$TEST_TRANSPORT", 00:30:31.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.686 "adrfam": "ipv4", 00:30:31.686 "trsvcid": "$NVMF_PORT", 00:30:31.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.686 "hdgst": ${hdgst:-false}, 00:30:31.686 "ddgst": ${ddgst:-false} 00:30:31.686 }, 00:30:31.686 "method": "bdev_nvme_attach_controller" 00:30:31.686 } 00:30:31.686 EOF 00:30:31.686 )") 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=359253 00:30:31.686 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:31.686 { 00:30:31.686 "params": { 00:30:31.686 "name": "Nvme$subsystem", 00:30:31.686 "trtype": "$TEST_TRANSPORT", 00:30:31.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.686 "adrfam": "ipv4", 00:30:31.686 "trsvcid": "$NVMF_PORT", 00:30:31.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.686 "hdgst": ${hdgst:-false}, 00:30:31.686 "ddgst": ${ddgst:-false} 00:30:31.686 }, 00:30:31.686 "method": "bdev_nvme_attach_controller" 00:30:31.686 } 00:30:31.686 EOF 00:30:31.686 )") 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:31.686 19:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:31.686 { 00:30:31.686 "params": { 00:30:31.686 "name": "Nvme$subsystem", 00:30:31.686 "trtype": "$TEST_TRANSPORT", 00:30:31.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.686 "adrfam": "ipv4", 00:30:31.686 "trsvcid": "$NVMF_PORT", 00:30:31.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.686 "hdgst": ${hdgst:-false}, 00:30:31.686 "ddgst": ${ddgst:-false} 00:30:31.686 }, 00:30:31.686 "method": "bdev_nvme_attach_controller" 00:30:31.686 } 00:30:31.686 EOF 00:30:31.686 )") 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:31.686 { 00:30:31.686 "params": { 00:30:31.686 "name": "Nvme$subsystem", 00:30:31.686 "trtype": "$TEST_TRANSPORT", 00:30:31.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.686 "adrfam": "ipv4", 00:30:31.686 "trsvcid": "$NVMF_PORT", 00:30:31.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.686 "hdgst": ${hdgst:-false}, 00:30:31.686 "ddgst": ${ddgst:-false} 00:30:31.686 }, 00:30:31.686 "method": "bdev_nvme_attach_controller" 00:30:31.686 } 00:30:31.686 EOF 00:30:31.686 )") 00:30:31.686 
19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 359247 00:30:31.686 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:31.945 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:31.945 "params": { 00:30:31.945 "name": "Nvme1", 00:30:31.945 "trtype": "tcp", 00:30:31.945 "traddr": "10.0.0.2", 00:30:31.945 "adrfam": "ipv4", 00:30:31.945 "trsvcid": "4420", 00:30:31.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.945 "hdgst": false, 00:30:31.946 "ddgst": false 00:30:31.946 }, 00:30:31.946 "method": "bdev_nvme_attach_controller" 00:30:31.946 }' 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:31.946 "params": { 00:30:31.946 "name": "Nvme1", 00:30:31.946 "trtype": "tcp", 00:30:31.946 "traddr": "10.0.0.2", 00:30:31.946 "adrfam": "ipv4", 00:30:31.946 "trsvcid": "4420", 
00:30:31.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.946 "hdgst": false, 00:30:31.946 "ddgst": false 00:30:31.946 }, 00:30:31.946 "method": "bdev_nvme_attach_controller" 00:30:31.946 }' 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:31.946 "params": { 00:30:31.946 "name": "Nvme1", 00:30:31.946 "trtype": "tcp", 00:30:31.946 "traddr": "10.0.0.2", 00:30:31.946 "adrfam": "ipv4", 00:30:31.946 "trsvcid": "4420", 00:30:31.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.946 "hdgst": false, 00:30:31.946 "ddgst": false 00:30:31.946 }, 00:30:31.946 "method": "bdev_nvme_attach_controller" 00:30:31.946 }' 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:31.946 19:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:31.946 "params": { 00:30:31.946 "name": "Nvme1", 00:30:31.946 "trtype": "tcp", 00:30:31.946 "traddr": "10.0.0.2", 00:30:31.946 "adrfam": "ipv4", 00:30:31.946 "trsvcid": "4420", 00:30:31.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.946 "hdgst": false, 00:30:31.946 "ddgst": false 00:30:31.946 }, 00:30:31.946 "method": "bdev_nvme_attach_controller" 00:30:31.946 }' 00:30:31.946 [2024-12-06 19:29:16.775240] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:31.946 [2024-12-06 19:29:16.775282] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:31.946 [2024-12-06 19:29:16.775281] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:30:31.946 [2024-12-06 19:29:16.775281] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:31.946 [2024-12-06 19:29:16.775333] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:31.946 [2024-12-06 19:29:16.775367] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:31.946 [2024-12-06 19:29:16.775367] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:31.946 [2024-12-06 19:29:16.775368] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:31.946 [2024-12-06 19:29:16.968645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.205 [2024-12-06 19:29:17.023802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.205 [2024-12-06 19:29:17.072180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.205 [2024-12-06 19:29:17.127316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:32.205 [2024-12-06 19:29:17.173984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.205 [2024-12-06 19:29:17.226084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.205 [2024-12-06 19:29:17.238893] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:30:32.463 [2024-12-06 19:29:17.290312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:32.463 Running I/O for 1 seconds... 00:30:32.463 Running I/O for 1 seconds... 00:30:32.463 Running I/O for 1 seconds... 00:30:32.463 Running I/O for 1 seconds... 00:30:33.398 189960.00 IOPS, 742.03 MiB/s 00:30:33.398 Latency(us) 00:30:33.398 [2024-12-06T18:29:18.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.398 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:33.398 Nvme1n1 : 1.00 189605.85 740.65 0.00 0.00 671.41 286.72 1844.72 00:30:33.398 [2024-12-06T18:29:18.447Z] =================================================================================================================== 00:30:33.398 [2024-12-06T18:29:18.447Z] Total : 189605.85 740.65 0.00 0.00 671.41 286.72 1844.72 00:30:33.398 11840.00 IOPS, 46.25 MiB/s 00:30:33.398 Latency(us) 00:30:33.398 [2024-12-06T18:29:18.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.398 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:33.398 Nvme1n1 : 1.01 11907.44 46.51 0.00 0.00 10712.86 2184.53 12524.66 00:30:33.398 [2024-12-06T18:29:18.447Z] =================================================================================================================== 00:30:33.398 [2024-12-06T18:29:18.447Z] Total : 11907.44 46.51 0.00 0.00 10712.86 2184.53 12524.66 00:30:33.658 8696.00 IOPS, 33.97 MiB/s 00:30:33.658 Latency(us) 00:30:33.658 [2024-12-06T18:29:18.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.658 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:33.658 Nvme1n1 : 1.01 8747.08 34.17 0.00 0.00 14563.77 5000.15 18835.53 00:30:33.658 [2024-12-06T18:29:18.707Z] =================================================================================================================== 00:30:33.658 
[2024-12-06T18:29:18.707Z] Total : 8747.08 34.17 0.00 0.00 14563.77 5000.15 18835.53 00:30:33.658 8368.00 IOPS, 32.69 MiB/s 00:30:33.658 Latency(us) 00:30:33.658 [2024-12-06T18:29:18.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.658 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:33.658 Nvme1n1 : 1.01 8447.56 33.00 0.00 0.00 15095.01 2002.49 21262.79 00:30:33.658 [2024-12-06T18:29:18.707Z] =================================================================================================================== 00:30:33.658 [2024-12-06T18:29:18.707Z] Total : 8447.56 33.00 0.00 0.00 15095.01 2002.49 21262.79 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 359248 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 359250 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 359253 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- 
# nvmfcleanup 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.658 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.917 rmmod nvme_tcp 00:30:33.917 rmmod nvme_fabrics 00:30:33.917 rmmod nvme_keyring 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 359223 ']' 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 359223 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 359223 ']' 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 359223 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 359223 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 359223' 00:30:33.917 killing process with pid 359223 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 359223 00:30:33.917 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 359223 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.175 19:29:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.175 19:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.079 00:30:36.079 real 0m7.352s 00:30:36.079 user 0m14.346s 00:30:36.079 sys 0m4.157s 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:36.079 ************************************ 00:30:36.079 END TEST nvmf_bdev_io_wait 00:30:36.079 ************************************ 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.079 ************************************ 00:30:36.079 START TEST nvmf_queue_depth 00:30:36.079 ************************************ 00:30:36.079 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
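For orientation, the nvmf_bdev_io_wait run above configures the target through a short RPC sequence (bdev_io_wait.sh@18-25 in the xtrace) before launching the four bdevperf jobs. The sketch below replays that sequence as a dry run: the `scripts/rpc.py` path is an assumption about the SPDK checkout layout, but the RPC commands and arguments are taken verbatim from the log above.

```shell
#!/bin/sh
# Dry-run sketch of the target setup exercised by bdev_io_wait.sh, per the log.
# RPC path is an assumption; the seven commands mirror the xtrace above.
RPC="scripts/rpc.py"
set -- \
  "bdev_set_options -p 5 -c 1" \
  "framework_start_init" \
  "nvmf_create_transport -t tcp -o -u 8192" \
  "bdev_malloc_create 64 512 -b Malloc0" \
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001" \
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0" \
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
for cmd in "$@"; do
  # Print instead of invoking, so the sketch runs without a live nvmf_tgt.
  echo "$RPC $cmd"
done
```

The small bdev pool from `bdev_set_options -p 5 -c 1` is what forces the bdevperf write/read/flush/unmap jobs into the IO_WAIT path that this test exercises.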
00:30:36.339 * Looking for test storage... 00:30:36.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.339 --rc genhtml_branch_coverage=1 00:30:36.339 --rc genhtml_function_coverage=1 00:30:36.339 --rc genhtml_legend=1 00:30:36.339 --rc geninfo_all_blocks=1 00:30:36.339 --rc geninfo_unexecuted_blocks=1 00:30:36.339 00:30:36.339 ' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.339 --rc genhtml_branch_coverage=1 00:30:36.339 --rc genhtml_function_coverage=1 00:30:36.339 --rc genhtml_legend=1 00:30:36.339 --rc geninfo_all_blocks=1 00:30:36.339 --rc geninfo_unexecuted_blocks=1 00:30:36.339 00:30:36.339 ' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.339 --rc genhtml_branch_coverage=1 00:30:36.339 --rc genhtml_function_coverage=1 00:30:36.339 --rc genhtml_legend=1 00:30:36.339 --rc geninfo_all_blocks=1 00:30:36.339 --rc geninfo_unexecuted_blocks=1 00:30:36.339 00:30:36.339 ' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.339 --rc genhtml_branch_coverage=1 00:30:36.339 --rc genhtml_function_coverage=1 00:30:36.339 
--rc genhtml_legend=1 00:30:36.339 --rc geninfo_all_blocks=1 00:30:36.339 --rc geninfo_unexecuted_blocks=1 00:30:36.339 00:30:36.339 ' 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.339 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.340 19:29:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.340 19:29:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.340 19:29:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.340 19:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.878 
19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:38.878 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.878 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:38.878 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.878 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:38.879 Found net devices under 0000:84:00.0: cvl_0_0 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:38.879 Found net devices under 0000:84:00.1: cvl_0_1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.879 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:38.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:38.879 00:30:38.879 --- 10.0.0.2 ping statistics --- 00:30:38.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.879 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:30:38.879 00:30:38.879 --- 10.0.0.1 ping statistics --- 00:30:38.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.879 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.879 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=361491 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 361491 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 361491 ']' 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.879 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.880 [2024-12-06 19:29:23.595100] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.880 [2024-12-06 19:29:23.596183] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:38.880 [2024-12-06 19:29:23.596237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.880 [2024-12-06 19:29:23.673188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.880 [2024-12-06 19:29:23.734394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.880 [2024-12-06 19:29:23.734450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.880 [2024-12-06 19:29:23.734474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.880 [2024-12-06 19:29:23.734485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.880 [2024-12-06 19:29:23.734495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.880 [2024-12-06 19:29:23.735226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.880 [2024-12-06 19:29:23.822858] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.880 [2024-12-06 19:29:23.823166] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.880 [2024-12-06 19:29:23.883919] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:38.880 Malloc0 00:30:38.880 19:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.880 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.139 [2024-12-06 19:29:23.944027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.139 
19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=361638 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:39.139 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 361638 /var/tmp/bdevperf.sock 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 361638 ']' 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:39.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.140 19:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.140 [2024-12-06 19:29:23.990672] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:30:39.140 [2024-12-06 19:29:23.990767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361638 ] 00:30:39.140 [2024-12-06 19:29:24.056595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.140 [2024-12-06 19:29:24.113479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.401 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.401 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:39.401 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.401 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.401 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.662 NVMe0n1 00:30:39.662 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.662 19:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:39.662 Running I/O for 10 seconds... 
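For readers following the xtrace above, the target-side setup that `queue_depth.sh` performs can be sketched as a standalone script. This is a sketch, not the script itself: `RPC_PY` is an assumed path, while the RPC method names and arguments are copied verbatim from the trace (script lines @23 through @27). It also cross-checks the MiB/s figure bdevperf derives in the results block that follows (IOPS × io_size ÷ 2^20), using the reported IOPS value.

```shell
# Sketch of the queue_depth.sh target setup traced above.
# Assumption: RPC_PY points at rpc.py inside an SPDK checkout.
RPC_PY=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

setup_cmds=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc0"
  "nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns $NQN Malloc0"
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
)

# Print (rather than execute) each RPC the setup would issue.
for cmd in "${setup_cmds[@]}"; do
  echo "$RPC_PY $cmd"
done

# Cross-check bdevperf's reported throughput: MiB/s = IOPS * io_size / 2^20,
# using the IOPS value from the results JSON below.
derived_mibps=$(awk 'BEGIN { printf "%.5f", 9424.763575709832 * 4096 / 1048576 }')
echo "derived MiB/s: $derived_mibps"
```

The derived figure matches the `"mibps"` field bdevperf reports, confirming the 4 KiB I/O size used by the `-o 4096` bdevperf argument.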
00:30:41.976 8574.00 IOPS, 33.49 MiB/s [2024-12-06T18:29:27.959Z] 8987.00 IOPS, 35.11 MiB/s [2024-12-06T18:29:28.898Z] 9198.00 IOPS, 35.93 MiB/s [2024-12-06T18:29:29.833Z] 9221.25 IOPS, 36.02 MiB/s [2024-12-06T18:29:30.769Z] 9316.80 IOPS, 36.39 MiB/s [2024-12-06T18:29:31.705Z] 9388.83 IOPS, 36.68 MiB/s [2024-12-06T18:29:33.085Z] 9367.71 IOPS, 36.59 MiB/s [2024-12-06T18:29:33.654Z] 9351.12 IOPS, 36.53 MiB/s [2024-12-06T18:29:35.052Z] 9396.67 IOPS, 36.71 MiB/s [2024-12-06T18:29:35.052Z] 9414.50 IOPS, 36.78 MiB/s 00:30:50.003 Latency(us) 00:30:50.003 [2024-12-06T18:29:35.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.003 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:50.003 Verification LBA range: start 0x0 length 0x4000 00:30:50.003 NVMe0n1 : 10.09 9424.76 36.82 0.00 0.00 108211.55 20680.25 69128.34 00:30:50.003 [2024-12-06T18:29:35.052Z] =================================================================================================================== 00:30:50.003 [2024-12-06T18:29:35.052Z] Total : 9424.76 36.82 0.00 0.00 108211.55 20680.25 69128.34 00:30:50.003 { 00:30:50.003 "results": [ 00:30:50.003 { 00:30:50.003 "job": "NVMe0n1", 00:30:50.003 "core_mask": "0x1", 00:30:50.003 "workload": "verify", 00:30:50.003 "status": "finished", 00:30:50.003 "verify_range": { 00:30:50.003 "start": 0, 00:30:50.003 "length": 16384 00:30:50.003 }, 00:30:50.003 "queue_depth": 1024, 00:30:50.003 "io_size": 4096, 00:30:50.003 "runtime": 10.090651, 00:30:50.003 "iops": 9424.763575709832, 00:30:50.003 "mibps": 36.81548271761653, 00:30:50.003 "io_failed": 0, 00:30:50.003 "io_timeout": 0, 00:30:50.003 "avg_latency_us": 108211.55306260646, 00:30:50.003 "min_latency_us": 20680.248888888887, 00:30:50.003 "max_latency_us": 69128.34370370371 00:30:50.003 } 00:30:50.003 ], 00:30:50.003 "core_count": 1 00:30:50.003 } 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 361638 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 361638 ']' 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 361638 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361638 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361638' 00:30:50.003 killing process with pid 361638 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 361638 00:30:50.003 Received shutdown signal, test time was about 10.000000 seconds 00:30:50.003 00:30:50.003 Latency(us) 00:30:50.003 [2024-12-06T18:29:35.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.003 [2024-12-06T18:29:35.052Z] =================================================================================================================== 00:30:50.003 [2024-12-06T18:29:35.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:50.003 19:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 361638 00:30:50.003 19:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.003 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.003 rmmod nvme_tcp 00:30:50.003 rmmod nvme_fabrics 00:30:50.264 rmmod nvme_keyring 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 361491 ']' 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 361491 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 361491 ']' 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 361491 00:30:50.264 19:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361491 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361491' 00:30:50.264 killing process with pid 361491 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 361491 00:30:50.264 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 361491 00:30:50.520 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.520 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
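The `iptr` step traced above restores the firewall minus any test rules: `nvmftestfini` pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A minimal illustration of that filter, using a fake two-rule set in place of real `iptables-save` output (the rule text here is invented for the example):

```shell
# Stand-in for iptables-save output: one SPDK_NVMF-tagged rule, one unrelated rule.
saved_rules='-A INPUT -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

# The cleanup keeps only rules NOT tagged SPDK_NVMF before restoring.
kept_rules=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$kept_rules"
```

In the real flow the filtered ruleset goes straight to `iptables-restore`, so only the test's own rules are dropped.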
00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.521 19:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:52.421 00:30:52.421 real 0m16.337s 00:30:52.421 user 0m22.394s 00:30:52.421 sys 0m3.703s 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:52.421 ************************************ 00:30:52.421 END TEST nvmf_queue_depth 00:30:52.421 ************************************ 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.421 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:52.680 ************************************ 00:30:52.680 START 
TEST nvmf_target_multipath 00:30:52.680 ************************************ 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:52.680 * Looking for test storage... 00:30:52.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:52.680 19:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.680 --rc genhtml_branch_coverage=1 00:30:52.680 --rc genhtml_function_coverage=1 00:30:52.680 --rc genhtml_legend=1 00:30:52.680 --rc geninfo_all_blocks=1 00:30:52.680 --rc geninfo_unexecuted_blocks=1 00:30:52.680 00:30:52.680 ' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.680 --rc genhtml_branch_coverage=1 00:30:52.680 --rc genhtml_function_coverage=1 00:30:52.680 --rc genhtml_legend=1 00:30:52.680 --rc geninfo_all_blocks=1 00:30:52.680 --rc geninfo_unexecuted_blocks=1 00:30:52.680 00:30:52.680 ' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.680 --rc genhtml_branch_coverage=1 00:30:52.680 --rc genhtml_function_coverage=1 00:30:52.680 --rc genhtml_legend=1 00:30:52.680 --rc geninfo_all_blocks=1 00:30:52.680 --rc geninfo_unexecuted_blocks=1 00:30:52.680 00:30:52.680 ' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:52.680 --rc genhtml_branch_coverage=1 00:30:52.680 --rc genhtml_function_coverage=1 00:30:52.680 --rc genhtml_legend=1 00:30:52.680 --rc geninfo_all_blocks=1 00:30:52.680 --rc geninfo_unexecuted_blocks=1 00:30:52.680 00:30:52.680 ' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.680 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:52.681 19:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.681 19:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:52.681 19:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.216 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:55.216 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.216 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:55.216 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:55.217 Found net devices under 0000:84:00.0: cvl_0_0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.217 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:55.217 Found net devices under 0000:84:00.1: cvl_0_1 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.217 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.217 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:30:55.217 00:30:55.217 --- 10.0.0.2 ping statistics --- 00:30:55.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.217 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:30:55.217 00:30:55.217 --- 10.0.0.1 ping statistics --- 00:30:55.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.217 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:55.217 only one NIC for nvmf test 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:55.217 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.217 rmmod nvme_tcp 00:30:55.217 rmmod nvme_fabrics 00:30:55.217 rmmod nvme_keyring 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:55.217 19:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.217 19:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.217 19:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.218 19:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.218 19:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.124 
19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.124 00:30:57.124 real 0m4.589s 00:30:57.124 user 0m0.983s 00:30:57.124 sys 0m1.607s 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:57.124 ************************************ 00:30:57.124 END TEST nvmf_target_multipath 00:30:57.124 ************************************ 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.124 ************************************ 00:30:57.124 START TEST nvmf_zcopy 00:30:57.124 ************************************ 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:57.124 * Looking for test storage... 
00:30:57.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.124 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.125 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.125 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.385 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:57.386 19:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.386 --rc genhtml_branch_coverage=1 00:30:57.386 --rc genhtml_function_coverage=1 00:30:57.386 --rc genhtml_legend=1 00:30:57.386 --rc geninfo_all_blocks=1 00:30:57.386 --rc geninfo_unexecuted_blocks=1 00:30:57.386 00:30:57.386 ' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.386 --rc genhtml_branch_coverage=1 00:30:57.386 --rc genhtml_function_coverage=1 00:30:57.386 --rc genhtml_legend=1 00:30:57.386 --rc geninfo_all_blocks=1 00:30:57.386 --rc geninfo_unexecuted_blocks=1 00:30:57.386 00:30:57.386 ' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.386 --rc genhtml_branch_coverage=1 00:30:57.386 --rc genhtml_function_coverage=1 00:30:57.386 --rc genhtml_legend=1 00:30:57.386 --rc geninfo_all_blocks=1 00:30:57.386 --rc geninfo_unexecuted_blocks=1 00:30:57.386 00:30:57.386 ' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:57.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.386 --rc genhtml_branch_coverage=1 00:30:57.386 --rc genhtml_function_coverage=1 00:30:57.386 --rc genhtml_legend=1 00:30:57.386 --rc geninfo_all_blocks=1 00:30:57.386 --rc geninfo_unexecuted_blocks=1 00:30:57.386 00:30:57.386 ' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.386 19:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.386 19:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.386 19:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.298 
19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:59.298 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.299 19:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:59.299 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:59.299 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:59.299 Found net devices under 0000:84:00.0: cvl_0_0 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:59.299 Found net devices under 0000:84:00.1: cvl_0_1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.299 19:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.299 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:30:59.560 00:30:59.560 --- 10.0.0.2 ping statistics --- 00:30:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.560 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:59.560 00:30:59.560 --- 10.0.0.1 ping statistics --- 00:30:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.560 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=366827 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 366827 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 366827 ']' 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.560 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.560 [2024-12-06 19:29:44.444885] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.560 [2024-12-06 19:29:44.446014] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:30:59.560 [2024-12-06 19:29:44.446087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.560 [2024-12-06 19:29:44.519428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.560 [2024-12-06 19:29:44.575414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.560 [2024-12-06 19:29:44.575485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.560 [2024-12-06 19:29:44.575499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.560 [2024-12-06 19:29:44.575510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.560 [2024-12-06 19:29:44.575520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.560 [2024-12-06 19:29:44.576288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.820 [2024-12-06 19:29:44.674609] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.820 [2024-12-06 19:29:44.674900] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.820 [2024-12-06 19:29:44.724983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.820 
19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.820 [2024-12-06 19:29:44.741182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.820 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.821 malloc0 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:59.821 { 00:30:59.821 "params": { 00:30:59.821 "name": "Nvme$subsystem", 00:30:59.821 "trtype": "$TEST_TRANSPORT", 00:30:59.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:59.821 "adrfam": "ipv4", 00:30:59.821 "trsvcid": "$NVMF_PORT", 00:30:59.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:59.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:59.821 "hdgst": ${hdgst:-false}, 00:30:59.821 "ddgst": ${ddgst:-false} 00:30:59.821 }, 00:30:59.821 "method": "bdev_nvme_attach_controller" 00:30:59.821 } 00:30:59.821 EOF 00:30:59.821 )") 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:59.821 19:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:59.821 19:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:59.821 "params": { 00:30:59.821 "name": "Nvme1", 00:30:59.821 "trtype": "tcp", 00:30:59.821 "traddr": "10.0.0.2", 00:30:59.821 "adrfam": "ipv4", 00:30:59.821 "trsvcid": "4420", 00:30:59.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:59.821 "hdgst": false, 00:30:59.821 "ddgst": false 00:30:59.821 }, 00:30:59.821 "method": "bdev_nvme_attach_controller" 00:30:59.821 }' 00:30:59.821 [2024-12-06 19:29:44.829541] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:30:59.821 [2024-12-06 19:29:44.829624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366867 ] 00:31:00.081 [2024-12-06 19:29:44.901559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.081 [2024-12-06 19:29:44.961346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.339 Running I/O for 10 seconds... 
00:31:02.215 5999.00 IOPS, 46.87 MiB/s
[2024-12-06T18:29:48.205Z] 6093.00 IOPS, 47.60 MiB/s
[2024-12-06T18:29:49.585Z] 6103.00 IOPS, 47.68 MiB/s
[2024-12-06T18:29:50.529Z] 6122.75 IOPS, 47.83 MiB/s
[2024-12-06T18:29:51.467Z] 6107.60 IOPS, 47.72 MiB/s
[2024-12-06T18:29:52.400Z] 6091.17 IOPS, 47.59 MiB/s
[2024-12-06T18:29:53.338Z] 6142.43 IOPS, 47.99 MiB/s
[2024-12-06T18:29:54.274Z] 6163.25 IOPS, 48.15 MiB/s
[2024-12-06T18:29:55.207Z] 6184.78 IOPS, 48.32 MiB/s
[2024-12-06T18:29:55.207Z] 6200.90 IOPS, 48.44 MiB/s
00:31:10.158 Latency(us)
00:31:10.158 [2024-12-06T18:29:55.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:10.158 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:10.158 Verification LBA range: start 0x0 length 0x1000
00:31:10.158 Nvme1n1 : 10.02 6202.05 48.45 0.00 0.00 20584.64 3737.98 26214.40
00:31:10.158 [2024-12-06T18:29:55.207Z] ===================================================================================================================
00:31:10.158 [2024-12-06T18:29:55.207Z] Total : 6202.05 48.45 0.00 0.00 20584.64 3737.98 26214.40
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=368049
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:10.415 {
00:31:10.415 "params": {
00:31:10.415 "name": "Nvme$subsystem",
00:31:10.415 "trtype": "$TEST_TRANSPORT",
00:31:10.415 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:10.415 "adrfam": "ipv4",
00:31:10.415 "trsvcid": "$NVMF_PORT",
00:31:10.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:10.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:10.415 "hdgst": ${hdgst:-false},
00:31:10.415 "ddgst": ${ddgst:-false}
00:31:10.415 },
00:31:10.415 "method": "bdev_nvme_attach_controller"
00:31:10.415 }
00:31:10.415 EOF
00:31:10.415 )")
00:31:10.415 [2024-12-06 19:29:55.416872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.415 [2024-12-06 19:29:55.416915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:10.415 19:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:10.415 "params": { 00:31:10.415 "name": "Nvme1", 00:31:10.415 "trtype": "tcp", 00:31:10.415 "traddr": "10.0.0.2", 00:31:10.415 "adrfam": "ipv4", 00:31:10.415 "trsvcid": "4420", 00:31:10.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.415 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.415 "hdgst": false, 00:31:10.415 "ddgst": false 00:31:10.415 }, 00:31:10.415 "method": "bdev_nvme_attach_controller" 00:31:10.415 }' 00:31:10.415 [2024-12-06 19:29:55.424808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.415 [2024-12-06 19:29:55.424833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.415 [2024-12-06 19:29:55.432804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.415 [2024-12-06 19:29:55.432827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.415 [2024-12-06 19:29:55.440801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.415 [2024-12-06 19:29:55.440823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.415 [2024-12-06 19:29:55.448801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.415 [2024-12-06 19:29:55.448822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.415 [2024-12-06 19:29:55.456800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.415 [2024-12-06 19:29:55.456821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.415 [2024-12-06 19:29:55.458292] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:31:10.415 [2024-12-06 19:29:55.458351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368049 ] 00:31:10.673 [2024-12-06 19:29:55.464809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.464833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.472801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.472823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.480802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.480823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.488799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.488820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.496815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.496837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.504815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.504837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.512814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.512837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:31:10.673 [2024-12-06 19:29:55.520811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.520833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.527991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.673 [2024-12-06 19:29:55.528812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.528833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.536842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.536878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.544846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.544886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.552810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.552831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.560810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.560831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.568809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.568830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.576809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.576830] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.584814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.584836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.589297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.673 [2024-12-06 19:29:55.592809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.592830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.600808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.600829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.608844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.608879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.616846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.616882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.624844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.624881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.632842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.632880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.640843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:31:10.673 [2024-12-06 19:29:55.640879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.648841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.648877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.656814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.656837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.664834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.664864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.672849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.672883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.680862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.680899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.688818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.688841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.696814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.696835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.704825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 
19:29:55.704851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.712818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.712845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.673 [2024-12-06 19:29:55.720820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.673 [2024-12-06 19:29:55.720845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.728818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.728843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.736818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.736843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.744827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.744852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.752814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.752837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.760812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.760833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.768812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.768834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.776812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.776833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.784811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.784832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.792818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.792841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.800811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.800833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.808811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.808832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.816813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.816833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.824810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.824831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.832818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.832842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 
[2024-12-06 19:29:55.840812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.840834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.848811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.848833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.856814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.856837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.864814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.864835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.872813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.872834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.881181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.881204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.888816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.888841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.896813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.896837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 Running I/O for 5 seconds... 
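[Editor's note] The 5-second pass above is driven by the invocation logged earlier, `bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192`, where `/dev/fd/63` is a bash process substitution carrying the config emitted by `gen_nvmf_target_json`. A minimal sketch of that pattern, with `cat` standing in for the real bdevperf binary (hypothetical stand-in, not the test's code):

```shell
#!/usr/bin/env bash
# zcopy.sh hands bdevperf its config via process substitution: the generator
# runs in a subshell and the consumer reads it as /dev/fd/NN, so no temporary
# config file is ever written to disk. 'cat' plays the role of bdevperf here.
gen_config() {
  printf '{ "method": "bdev_nvme_attach_controller" }\n'
}
received=$(cat <(gen_config))
printf '%s\n' "$received"
```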
00:31:10.934 [2024-12-06 19:29:55.913223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.913250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.924224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.924250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.935465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.935491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.950330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.950357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.959658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.959685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.934 [2024-12-06 19:29:55.973622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.934 [2024-12-06 19:29:55.973658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:55.983924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:55.983952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:55.997759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:55.997786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.008266] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.008291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.020536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.020562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.031485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.031510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.042401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.042426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.053754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.053796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.064948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.064976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.076422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.076455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.087298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.087323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.101258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.101283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.111412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.111437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.126172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.126197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.135453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.135478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.147269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.147295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.161098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.161123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.171380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.171405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.183362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.183387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.196661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 
[2024-12-06 19:29:56.196716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.206622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.206647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.218919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.218946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.195 [2024-12-06 19:29:56.234633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.195 [2024-12-06 19:29:56.234658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.454 [2024-12-06 19:29:56.244581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.244614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.256556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.256581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.267831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.267859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.279361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.279393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.292688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.292739] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.302867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.302893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.314826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.314853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.331473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.331499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.346567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.346592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.356111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.356137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.367596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.367621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.381866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.381894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.391285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.391311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:11.455 [2024-12-06 19:29:56.403092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.403118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.417668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.417692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.427558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.427608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.441717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.441757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.451801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.451829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.466102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.466126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.476064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.476104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.489567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.489591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.455 [2024-12-06 19:29:56.498458] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.455 [2024-12-06 19:29:56.498482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.509739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.509780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.519932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.519958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.533946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.533972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.542904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.542931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.553895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.553921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.563798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.563824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.577590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.577614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.586873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.586900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.598279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.598303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.608585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.608609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.619145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.619169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.633971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.633998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.643610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.643635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.654837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.654864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.668762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.668802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.677499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 
[2024-12-06 19:29:56.677523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.688694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.688745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.698841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.698868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.712695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.712747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.722170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.722194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.732954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.713 [2024-12-06 19:29:56.732980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.713 [2024-12-06 19:29:56.743352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.714 [2024-12-06 19:29:56.743377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.714 [2024-12-06 19:29:56.757825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.714 [2024-12-06 19:29:56.757855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.767188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.767213] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.782515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.782540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.791971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.792011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.805255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.805280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.814845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.814872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.826056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.826096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.836211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.836235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.848202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.848227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.857668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.857693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:11.972 [2024-12-06 19:29:56.868786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.868816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.878554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.878578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.890087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.890114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.900407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.900431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 11771.00 IOPS, 91.96 MiB/s [2024-12-06T18:29:57.021Z] [2024-12-06 19:29:56.910456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.910479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.921657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.921681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.932804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.932831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.943523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.943547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:11.972 [2024-12-06 19:29:56.955875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.955901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.966483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.966507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.977923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.977951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:56.988445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:56.988469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:57.001571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:57.001595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:11.972 [2024-12-06 19:29:57.010843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:11.972 [2024-12-06 19:29:57.010868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.022507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.022532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.033056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.033093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.043429] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.043453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.056953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.056979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.066416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.066440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.077660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.077684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.087603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.087626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.102775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.102800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.112270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.112294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.123567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.123591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.137227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.137251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.147153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.147177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.158518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.158543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.169321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.169345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.180463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.180487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.192550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.192574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.201846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.201871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.213280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.213304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.223858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 
[2024-12-06 19:29:57.223884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.236412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.236436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.247804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.247830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.261870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.261895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.231 [2024-12-06 19:29:57.270872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.231 [2024-12-06 19:29:57.270906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.282764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.282801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.298121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.298160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.307636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.307660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.322796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.322822] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.332107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.332131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.343631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.343655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.355993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.356032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.370041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.370066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.379283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.379307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.391421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.391446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.404254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.404278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.491 [2024-12-06 19:29:57.418263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.418286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:12.491 [2024-12-06 19:29:57.428146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.491 [2024-12-06 19:29:57.428170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.439618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.439644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.455169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.455193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.464419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.464443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.476265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.476289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.488646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.488670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.498650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.498683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.510173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.510196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.520980] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.521019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.492 [2024-12-06 19:29:57.531787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.492 [2024-12-06 19:29:57.531814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.545450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.545476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.555243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.555267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.566820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.566845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.581867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.581892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.591193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.591216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.603148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.603172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.615758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.615784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.629986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.630025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.639293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.639316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.650877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.650901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.664806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.664831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.673889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.673914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.685336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.685359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.695268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 [2024-12-06 19:29:57.695292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.709182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.752 
[2024-12-06 19:29:57.709206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.752 [2024-12-06 19:29:57.718605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.718636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.729883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.729908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.739536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.739560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.754963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.754988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.764289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.764313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.775951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.775977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.788550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.788574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:12.753 [2024-12-06 19:29:57.797976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:12.753 [2024-12-06 19:29:57.798016] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.809659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.809684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.820482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.820505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.832831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.832857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.842277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.842301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.853352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.853375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.864324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.864349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.877122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.877146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.886402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.886426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:13.011 [2024-12-06 19:29:57.898130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.898154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 11867.00 IOPS, 92.71 MiB/s [2024-12-06T18:29:58.060Z] [2024-12-06 19:29:57.908975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.909000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.919356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.919379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.932892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.932918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.942793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.942819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.954568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.954592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.965240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.965263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.976233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.976257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:13.011 [2024-12-06 19:29:57.986309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.986334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:57.997616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:57.997641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:58.008272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:58.008297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:58.020923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:58.020949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:58.030965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:58.030991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:58.042340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:58.042365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.011 [2024-12-06 19:29:58.053119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.011 [2024-12-06 19:29:58.053144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.268 [2024-12-06 19:29:58.063740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:13.268 [2024-12-06 19:29:58.063766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:13.268 [2024-12-06 19:29:58.079387] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:13.268 [2024-12-06 19:29:58.079412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:13.268 [2024-12-06 19:29:58.094620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:13.268 [2024-12-06 19:29:58.094644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair of errors repeats continuously from 19:29:58.079 through 19:30:00.083; duplicate entries omitted]
00:31:14.042 11936.00 IOPS, 93.25 MiB/s [2024-12-06T18:29:59.091Z]
00:31:15.084 11967.25 IOPS, 93.49 MiB/s [2024-12-06T18:30:00.133Z]
00:31:15.084 [2024-12-06 19:30:00.083323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:15.084 [2024-12-06 19:30:00.083347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:15.084 [2024-12-06 19:30:00.094698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.084 [2024-12-06 19:30:00.094746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.084 [2024-12-06 19:30:00.110404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.084 [2024-12-06 19:30:00.110428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.084 [2024-12-06 19:30:00.120159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.084 [2024-12-06 19:30:00.120183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.084 [2024-12-06 19:30:00.132301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.084 [2024-12-06 19:30:00.132336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.343 [2024-12-06 19:30:00.146873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.343 [2024-12-06 19:30:00.146900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.343 [2024-12-06 19:30:00.156622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.343 [2024-12-06 19:30:00.156646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.168292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.168315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.182706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.182768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.192487] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.192510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.204276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.204323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.218675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.218715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.227762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.227787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.241119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.241144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.250748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.250790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.261879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.261905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.272824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.272851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.283213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.283237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.298188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.298212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.307867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.307894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.322171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.322197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.331650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.331675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.346336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.346361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.355506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.355552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.369956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 [2024-12-06 19:30:00.369984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.344 [2024-12-06 19:30:00.379449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.344 
[2024-12-06 19:30:00.379473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.602 [2024-12-06 19:30:00.395367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.602 [2024-12-06 19:30:00.395401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.602 [2024-12-06 19:30:00.410757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.602 [2024-12-06 19:30:00.410785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.602 [2024-12-06 19:30:00.420350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.602 [2024-12-06 19:30:00.420374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.602 [2024-12-06 19:30:00.431651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.602 [2024-12-06 19:30:00.431675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.602 [2024-12-06 19:30:00.446265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.602 [2024-12-06 19:30:00.446289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.455612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.455637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.469559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.469583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.479282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.479315] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.490994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.491033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.505161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.505185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.514367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.514391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.526048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.526073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.536256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.536280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.547050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.547089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.562379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.562402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.572115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.572139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:15.603 [2024-12-06 19:30:00.583817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.583852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.596533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.596557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.606062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.606106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.617565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.617589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.628244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.628268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.603 [2024-12-06 19:30:00.639032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.603 [2024-12-06 19:30:00.639057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.862 [2024-12-06 19:30:00.654484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.862 [2024-12-06 19:30:00.654509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.862 [2024-12-06 19:30:00.664156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.862 [2024-12-06 19:30:00.664181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.862 [2024-12-06 19:30:00.675690] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.862 [2024-12-06 19:30:00.675743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.862 [2024-12-06 19:30:00.688211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.862 [2024-12-06 19:30:00.688235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.697746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.697786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.709056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.709095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.719619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.719654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.733670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.733728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.742849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.742875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.754518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.754542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.770203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.770227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.780545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.780569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.791888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.791914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.802849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.802874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.817638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.817663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.826351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.826375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.838330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.838354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.848982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.849020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.859118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 
[2024-12-06 19:30:00.859148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.873537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.873561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.882620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.882644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.863 [2024-12-06 19:30:00.893887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:15.863 [2024-12-06 19:30:00.893912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.912782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.912809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 11939.80 IOPS, 93.28 MiB/s [2024-12-06T18:30:01.171Z] [2024-12-06 19:30:00.922075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.922100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 00:31:16.122 Latency(us) 00:31:16.122 [2024-12-06T18:30:01.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.122 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:16.122 Nvme1n1 : 5.01 11940.87 93.29 0.00 0.00 10706.32 2597.17 18641.35 00:31:16.122 [2024-12-06T18:30:01.171Z] =================================================================================================================== 00:31:16.122 [2024-12-06T18:30:01.171Z] Total : 11940.87 93.29 0.00 0.00 10706.32 2597.17 18641.35 00:31:16.122 [2024-12-06 19:30:00.928836] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.928860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.936820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.936845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.944794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.944815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.952857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.952903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.960856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.960903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.968863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.968909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.976853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.976899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.984854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.984901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:00.992859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:00.992905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.000854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.000899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.008858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.008904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.016857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.016904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.024860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.024907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.032858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.032907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.040858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.040905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.048857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.048902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.056838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 
[2024-12-06 19:30:01.056874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.064816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.064837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.072814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.072834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.080812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.080833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.088806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.088828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.096860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.096905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.104851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.104894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.122 [2024-12-06 19:30:01.112813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.122 [2024-12-06 19:30:01.112843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.123 [2024-12-06 19:30:01.120811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.123 [2024-12-06 19:30:01.120831] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.123 [2024-12-06 19:30:01.128799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:16.123 [2024-12-06 19:30:01.128819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (368049) - No such process 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 368049 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 delay0 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.123 19:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.123 19:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:16.380 [2024-12-06 19:30:01.206445] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:22.945 Initializing NVMe Controllers 00:31:22.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:22.945 Initialization complete. Launching workers. 00:31:22.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 229 00:31:22.945 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 516, failed to submit 33 00:31:22.945 success 410, unsuccessful 106, failed 0 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:22.945 19:30:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.945 rmmod nvme_tcp 00:31:22.945 rmmod nvme_fabrics 00:31:22.945 rmmod nvme_keyring 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 366827 ']' 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 366827 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 366827 ']' 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 366827 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366827 00:31:22.945 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366827' 00:31:22.946 killing 
process with pid 366827 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 366827 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 366827 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.946 19:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.848 00:31:24.848 real 0m27.645s 00:31:24.848 user 0m37.896s 00:31:24.848 sys 0m11.043s 
00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:24.848 ************************************ 00:31:24.848 END TEST nvmf_zcopy 00:31:24.848 ************************************ 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.848 ************************************ 00:31:24.848 START TEST nvmf_nmic 00:31:24.848 ************************************ 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:24.848 * Looking for test storage... 
00:31:24.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:24.848 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:25.106 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:25.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.107 --rc genhtml_branch_coverage=1 00:31:25.107 --rc genhtml_function_coverage=1 00:31:25.107 --rc genhtml_legend=1 00:31:25.107 --rc geninfo_all_blocks=1 00:31:25.107 --rc geninfo_unexecuted_blocks=1 00:31:25.107 00:31:25.107 ' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:25.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.107 --rc genhtml_branch_coverage=1 00:31:25.107 --rc genhtml_function_coverage=1 00:31:25.107 --rc genhtml_legend=1 00:31:25.107 --rc geninfo_all_blocks=1 00:31:25.107 --rc geninfo_unexecuted_blocks=1 00:31:25.107 00:31:25.107 ' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:25.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.107 --rc genhtml_branch_coverage=1 00:31:25.107 --rc genhtml_function_coverage=1 00:31:25.107 --rc genhtml_legend=1 00:31:25.107 --rc geninfo_all_blocks=1 00:31:25.107 --rc geninfo_unexecuted_blocks=1 00:31:25.107 00:31:25.107 ' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:25.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.107 --rc genhtml_branch_coverage=1 00:31:25.107 --rc genhtml_function_coverage=1 00:31:25.107 --rc genhtml_legend=1 00:31:25.107 --rc geninfo_all_blocks=1 00:31:25.107 --rc geninfo_unexecuted_blocks=1 00:31:25.107 00:31:25.107 ' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.107 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.108 19:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.638 19:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:27.638 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:27.638 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:27.638 Found net devices under 0000:84:00.0: cvl_0_0 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:27.638 Found net devices under 0000:84:00.1: cvl_0_1 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.638 19:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:27.638 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:27.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:31:27.639 00:31:27.639 --- 10.0.0.2 ping statistics --- 00:31:27.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.639 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:31:27.639 00:31:27.639 --- 10.0.0.1 ping statistics --- 00:31:27.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.639 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=372051 
00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 372051 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 372051 ']' 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 [2024-12-06 19:30:12.287896] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:27.639 [2024-12-06 19:30:12.289040] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:31:27.639 [2024-12-06 19:30:12.289106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.639 [2024-12-06 19:30:12.364402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:27.639 [2024-12-06 19:30:12.425632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.639 [2024-12-06 19:30:12.425696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.639 [2024-12-06 19:30:12.425709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.639 [2024-12-06 19:30:12.425729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.639 [2024-12-06 19:30:12.425755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.639 [2024-12-06 19:30:12.427491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.639 [2024-12-06 19:30:12.427548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.639 [2024-12-06 19:30:12.427615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.639 [2024-12-06 19:30:12.427618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.639 [2024-12-06 19:30:12.527685] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.639 [2024-12-06 19:30:12.527924] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:27.639 [2024-12-06 19:30:12.528226] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:27.639 [2024-12-06 19:30:12.528839] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.639 [2024-12-06 19:30:12.529068] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 [2024-12-06 19:30:12.580412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 Malloc0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.639 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.640 [2024-12-06 19:30:12.652537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:27.640 test case1: single bdev can't be used in multiple subsystems 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.640 [2024-12-06 19:30:12.676304] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:31:27.640 [2024-12-06 19:30:12.676332] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:27.640 [2024-12-06 19:30:12.676353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:27.640 request: 00:31:27.640 { 00:31:27.640 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:27.640 "namespace": { 00:31:27.640 "bdev_name": "Malloc0", 00:31:27.640 "no_auto_visible": false, 00:31:27.640 "hide_metadata": false 00:31:27.640 }, 00:31:27.640 "method": "nvmf_subsystem_add_ns", 00:31:27.640 "req_id": 1 00:31:27.640 } 00:31:27.640 Got JSON-RPC error response 00:31:27.640 response: 00:31:27.640 { 00:31:27.640 "code": -32602, 00:31:27.640 "message": "Invalid parameters" 00:31:27.640 } 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:27.640 Adding namespace failed - expected result. 
00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:27.640 test case2: host connect to nvmf target in multiple paths 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.640 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.640 [2024-12-06 19:30:12.684387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:27.898 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.898 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:27.898 19:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:28.156 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:28.156 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:28.156 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:28.156 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:28.156 19:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:30.062 19:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:30.062 [global] 00:31:30.062 thread=1 00:31:30.062 invalidate=1 00:31:30.062 rw=write 00:31:30.062 time_based=1 00:31:30.062 runtime=1 00:31:30.062 ioengine=libaio 00:31:30.062 direct=1 00:31:30.062 bs=4096 00:31:30.062 iodepth=1 00:31:30.062 norandommap=0 00:31:30.062 numjobs=1 00:31:30.062 00:31:30.062 verify_dump=1 00:31:30.062 verify_backlog=512 00:31:30.062 verify_state_save=0 00:31:30.062 do_verify=1 00:31:30.062 verify=crc32c-intel 00:31:30.062 [job0] 00:31:30.062 filename=/dev/nvme0n1 00:31:30.062 Could not set queue depth (nvme0n1) 00:31:30.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:30.319 fio-3.35 00:31:30.319 Starting 1 thread 00:31:31.694 00:31:31.694 job0: (groupid=0, jobs=1): err= 0: pid=372554: Fri Dec 6 19:30:16 
2024 00:31:31.694 read: IOPS=342, BW=1369KiB/s (1402kB/s)(1392KiB/1017msec) 00:31:31.694 slat (nsec): min=6771, max=61928, avg=18469.37, stdev=5463.64 00:31:31.694 clat (usec): min=259, max=42001, avg=2549.30, stdev=9271.45 00:31:31.694 lat (usec): min=273, max=42014, avg=2567.77, stdev=9271.72 00:31:31.694 clat percentiles (usec): 00:31:31.694 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:31:31.694 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:31:31.694 | 70.00th=[ 322], 80.00th=[ 469], 90.00th=[ 498], 95.00th=[40633], 00:31:31.694 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:31.694 | 99.99th=[42206] 00:31:31.694 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:31:31.694 slat (usec): min=6, max=28001, avg=62.53, stdev=1237.16 00:31:31.694 clat (usec): min=137, max=282, avg=170.82, stdev=37.60 00:31:31.694 lat (usec): min=144, max=28259, avg=233.36, stdev=1241.61 00:31:31.694 clat percentiles (usec): 00:31:31.694 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:31:31.694 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 163], 00:31:31.694 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 245], 95.00th=[ 247], 00:31:31.694 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:31:31.694 | 99.99th=[ 281] 00:31:31.694 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:31.694 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:31.694 lat (usec) : 250=58.02%, 500=38.14%, 750=1.63% 00:31:31.694 lat (msec) : 50=2.21% 00:31:31.694 cpu : usr=0.98%, sys=0.89%, ctx=863, majf=0, minf=1 00:31:31.694 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.694 issued rwts: total=348,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:31.694 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:31.694 00:31:31.694 Run status group 0 (all jobs): 00:31:31.694 READ: bw=1369KiB/s (1402kB/s), 1369KiB/s-1369KiB/s (1402kB/s-1402kB/s), io=1392KiB (1425kB), run=1017-1017msec 00:31:31.694 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:31:31.694 00:31:31.694 Disk stats (read/write): 00:31:31.694 nvme0n1: ios=371/512, merge=0/0, ticks=1744/87, in_queue=1831, util=98.80% 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:31.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:31.694 19:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:31.694 rmmod nvme_tcp 00:31:31.694 rmmod nvme_fabrics 00:31:31.694 rmmod nvme_keyring 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 372051 ']' 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 372051 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 372051 ']' 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 372051 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 372051 00:31:31.694 
19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 372051' 00:31:31.694 killing process with pid 372051 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 372051 00:31:31.694 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 372051 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.952 19:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.856 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.856 00:31:33.856 real 0m9.091s 00:31:33.856 user 0m16.788s 00:31:33.856 sys 0m3.206s 00:31:33.856 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.856 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:33.856 ************************************ 00:31:33.856 END TEST nvmf_nmic 00:31:33.856 ************************************ 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:34.116 ************************************ 00:31:34.116 START TEST nvmf_fio_target 00:31:34.116 ************************************ 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:34.116 * Looking for test storage... 
00:31:34.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:34.116 19:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.116 
19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.116 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:34.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.116 --rc genhtml_branch_coverage=1 00:31:34.116 --rc genhtml_function_coverage=1 00:31:34.116 --rc genhtml_legend=1 00:31:34.117 --rc geninfo_all_blocks=1 00:31:34.117 --rc geninfo_unexecuted_blocks=1 00:31:34.117 00:31:34.117 ' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.117 --rc genhtml_branch_coverage=1 00:31:34.117 --rc genhtml_function_coverage=1 00:31:34.117 --rc genhtml_legend=1 00:31:34.117 --rc geninfo_all_blocks=1 00:31:34.117 --rc geninfo_unexecuted_blocks=1 00:31:34.117 00:31:34.117 ' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.117 --rc genhtml_branch_coverage=1 00:31:34.117 --rc genhtml_function_coverage=1 00:31:34.117 --rc genhtml_legend=1 00:31:34.117 --rc geninfo_all_blocks=1 00:31:34.117 --rc geninfo_unexecuted_blocks=1 00:31:34.117 00:31:34.117 ' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:34.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.117 --rc genhtml_branch_coverage=1 00:31:34.117 --rc genhtml_function_coverage=1 00:31:34.117 --rc genhtml_legend=1 00:31:34.117 --rc geninfo_all_blocks=1 
00:31:34.117 --rc geninfo_unexecuted_blocks=1 00:31:34.117 00:31:34.117 ' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:34.117 
19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.117 19:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.117 
19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.117 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.118 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:34.118 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:34.118 19:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:34.118 19:30:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.650 19:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:36.650 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:36.650 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.650 
19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.650 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:36.651 Found net 
devices under 0000:84:00.0: cvl_0_0 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:36.651 Found net devices under 0000:84:00.1: cvl_0_1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.651 19:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:36.651 00:31:36.651 --- 10.0.0.2 ping statistics --- 00:31:36.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.651 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:31:36.651 00:31:36.651 --- 10.0.0.1 ping statistics --- 00:31:36.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.651 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.651 19:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=374646 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 374646 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 374646 ']' 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.651 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.651 [2024-12-06 19:30:21.443637] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.651 [2024-12-06 19:30:21.444749] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:31:36.651 [2024-12-06 19:30:21.444804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.651 [2024-12-06 19:30:21.518537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.651 [2024-12-06 19:30:21.579476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.651 [2024-12-06 19:30:21.579560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.651 [2024-12-06 19:30:21.579574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.651 [2024-12-06 19:30:21.579584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.651 [2024-12-06 19:30:21.579594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.651 [2024-12-06 19:30:21.581357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.651 [2024-12-06 19:30:21.581421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.651 [2024-12-06 19:30:21.581487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.651 [2024-12-06 19:30:21.581490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.651 [2024-12-06 19:30:21.681255] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.651 [2024-12-06 19:30:21.681475] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:36.651 [2024-12-06 19:30:21.681794] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:36.651 [2024-12-06 19:30:21.682447] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.651 [2024-12-06 19:30:21.682646] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.910 19:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:37.169 [2024-12-06 19:30:21.994247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.169 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.429 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:37.429 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:31:37.688 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:37.688 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.946 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:37.946 19:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:38.204 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:38.204 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:38.462 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:38.722 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:38.722 19:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:39.288 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:39.288 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:39.547 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:39.547 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:39.806 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:40.064 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:40.064 19:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.348 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:40.348 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:40.616 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.901 [2024-12-06 19:30:25.834390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.901 19:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:41.207 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:41.519 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:41.795 19:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:43.738 19:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:43.738 [global] 00:31:43.738 thread=1 00:31:43.738 invalidate=1 00:31:43.738 rw=write 00:31:43.738 time_based=1 00:31:43.738 runtime=1 00:31:43.738 ioengine=libaio 00:31:43.738 direct=1 00:31:43.738 bs=4096 00:31:43.738 iodepth=1 00:31:43.738 norandommap=0 00:31:43.738 numjobs=1 00:31:43.738 00:31:43.738 verify_dump=1 00:31:43.738 verify_backlog=512 00:31:43.738 verify_state_save=0 00:31:43.738 do_verify=1 00:31:43.738 verify=crc32c-intel 00:31:43.738 [job0] 00:31:43.738 filename=/dev/nvme0n1 00:31:43.738 [job1] 00:31:43.738 filename=/dev/nvme0n2 00:31:43.738 [job2] 00:31:43.738 filename=/dev/nvme0n3 00:31:43.738 [job3] 00:31:43.738 filename=/dev/nvme0n4 00:31:43.738 Could not set queue depth (nvme0n1) 00:31:43.738 Could not set queue depth (nvme0n2) 00:31:43.738 Could not set queue depth (nvme0n3) 00:31:43.738 Could not set queue depth (nvme0n4) 00:31:43.995 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.995 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.995 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.995 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:43.995 fio-3.35 00:31:43.995 Starting 4 threads 00:31:45.371 00:31:45.371 job0: (groupid=0, jobs=1): err= 0: pid=375731: Fri Dec 6 19:30:30 2024 00:31:45.371 read: IOPS=22, BW=88.5KiB/s (90.6kB/s)(92.0KiB/1040msec) 00:31:45.371 slat (nsec): min=7549, max=35700, avg=16518.22, stdev=7062.76 00:31:45.371 clat (usec): min=537, max=41983, avg=39248.51, stdev=8443.37 00:31:45.371 lat (usec): min=544, 
max=41996, avg=39265.03, stdev=8445.29 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:45.371 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:45.371 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:45.371 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:45.371 | 99.99th=[42206] 00:31:45.371 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:31:45.371 slat (nsec): min=7543, max=25470, avg=9098.02, stdev=2262.77 00:31:45.371 clat (usec): min=192, max=356, avg=255.40, stdev=39.32 00:31:45.371 lat (usec): min=200, max=367, avg=264.49, stdev=39.78 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:31:45.371 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 265], 00:31:45.371 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 322], 00:31:45.371 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 359], 00:31:45.371 | 99.99th=[ 359] 00:31:45.371 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:31:45.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:45.371 lat (usec) : 250=52.34%, 500=43.36%, 750=0.19% 00:31:45.371 lat (msec) : 50=4.11% 00:31:45.371 cpu : usr=0.10%, sys=0.77%, ctx=535, majf=0, minf=1 00:31:45.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.371 job1: (groupid=0, jobs=1): err= 0: pid=375733: Fri Dec 6 19:30:30 2024 00:31:45.371 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:31:45.371 slat (nsec): min=6209, max=88779, avg=10397.59, stdev=5553.97 00:31:45.371 clat (usec): min=193, max=1096, avg=248.18, stdev=44.61 00:31:45.371 lat (usec): min=199, max=1104, avg=258.57, stdev=46.94 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:31:45.371 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 249], 00:31:45.371 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:31:45.371 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 693], 99.95th=[ 873], 00:31:45.371 | 99.99th=[ 1090] 00:31:45.371 write: IOPS=2491, BW=9966KiB/s (10.2MB/s)(9976KiB/1001msec); 0 zone resets 00:31:45.371 slat (nsec): min=7969, max=37432, avg=10655.68, stdev=3553.01 00:31:45.371 clat (usec): min=136, max=1522, avg=172.53, stdev=36.11 00:31:45.371 lat (usec): min=145, max=1535, avg=183.19, stdev=37.30 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:31:45.371 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:31:45.371 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 217], 00:31:45.371 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 343], 99.95th=[ 469], 00:31:45.371 | 99.99th=[ 1516] 00:31:45.371 bw ( KiB/s): min=11704, max=11704, per=49.00%, avg=11704.00, stdev= 0.00, samples=1 00:31:45.371 iops : min= 2926, max= 2926, avg=2926.00, stdev= 0.00, samples=1 00:31:45.371 lat (usec) : 250=81.75%, 500=18.14%, 750=0.04%, 1000=0.02% 00:31:45.371 lat (msec) : 2=0.04% 00:31:45.371 cpu : usr=3.20%, sys=6.50%, ctx=4543, majf=0, minf=1 00:31:45.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 issued rwts: total=2048,2494,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:45.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.371 job2: (groupid=0, jobs=1): err= 0: pid=375734: Fri Dec 6 19:30:30 2024 00:31:45.371 read: IOPS=1573, BW=6293KiB/s (6444kB/s)(6356KiB/1010msec) 00:31:45.371 slat (nsec): min=7112, max=51689, avg=8987.62, stdev=3468.09 00:31:45.371 clat (usec): min=199, max=41441, avg=306.95, stdev=1033.91 00:31:45.371 lat (usec): min=206, max=41457, avg=315.93, stdev=1034.11 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 245], 00:31:45.371 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:31:45.371 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 351], 00:31:45.371 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[41681], 00:31:45.371 | 99.99th=[41681] 00:31:45.371 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:31:45.371 slat (nsec): min=9253, max=54256, avg=14692.89, stdev=6681.27 00:31:45.371 clat (usec): min=128, max=1109, avg=227.23, stdev=53.58 00:31:45.371 lat (usec): min=157, max=1122, avg=241.93, stdev=55.81 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 184], 00:31:45.371 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 225], 60.00th=[ 235], 00:31:45.371 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 306], 00:31:45.371 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 840], 99.95th=[ 955], 00:31:45.371 | 99.99th=[ 1106] 00:31:45.371 bw ( KiB/s): min= 8192, max= 8192, per=34.30%, avg=8192.00, stdev= 0.00, samples=2 00:31:45.371 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:31:45.371 lat (usec) : 250=54.74%, 500=44.62%, 750=0.49%, 1000=0.08% 00:31:45.371 lat (msec) : 2=0.03%, 50=0.03% 00:31:45.371 cpu : usr=3.37%, sys=5.35%, ctx=3638, majf=0, minf=1 00:31:45.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.371 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 issued rwts: total=1589,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.371 job3: (groupid=0, jobs=1): err= 0: pid=375735: Fri Dec 6 19:30:30 2024 00:31:45.371 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:31:45.371 slat (nsec): min=5091, max=36060, avg=10834.98, stdev=5186.62 00:31:45.371 clat (usec): min=211, max=41054, avg=703.01, stdev=3952.89 00:31:45.371 lat (usec): min=216, max=41067, avg=713.84, stdev=3953.24 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:31:45.371 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 289], 00:31:45.371 | 70.00th=[ 310], 80.00th=[ 347], 90.00th=[ 469], 95.00th=[ 498], 00:31:45.371 | 99.00th=[ 7046], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:45.371 | 99.99th=[41157] 00:31:45.371 write: IOPS=1154, BW=4619KiB/s (4730kB/s)(4624KiB/1001msec); 0 zone resets 00:31:45.371 slat (nsec): min=7600, max=42738, avg=12755.64, stdev=5081.09 00:31:45.371 clat (usec): min=146, max=349, avg=213.82, stdev=45.15 00:31:45.371 lat (usec): min=155, max=361, avg=226.57, stdev=46.29 00:31:45.371 clat percentiles (usec): 00:31:45.371 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 174], 00:31:45.371 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 217], 00:31:45.371 | 70.00th=[ 227], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 302], 00:31:45.371 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 334], 99.95th=[ 351], 00:31:45.371 | 99.99th=[ 351] 00:31:45.371 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:31:45.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:45.371 lat (usec) : 250=56.01%, 500=41.79%, 750=1.61% 00:31:45.371 lat (msec) : 2=0.09%, 10=0.05%, 
50=0.46% 00:31:45.371 cpu : usr=1.40%, sys=2.50%, ctx=2183, majf=0, minf=1 00:31:45.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.371 issued rwts: total=1024,1156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.371 00:31:45.371 Run status group 0 (all jobs): 00:31:45.371 READ: bw=17.6MiB/s (18.4MB/s), 88.5KiB/s-8184KiB/s (90.6kB/s-8380kB/s), io=18.3MiB (19.2MB), run=1001-1040msec 00:31:45.371 WRITE: bw=23.3MiB/s (24.5MB/s), 1969KiB/s-9966KiB/s (2016kB/s-10.2MB/s), io=24.3MiB (25.4MB), run=1001-1040msec 00:31:45.371 00:31:45.371 Disk stats (read/write): 00:31:45.371 nvme0n1: ios=68/512, merge=0/0, ticks=732/127, in_queue=859, util=86.57% 00:31:45.371 nvme0n2: ios=1860/2048, merge=0/0, ticks=1433/345, in_queue=1778, util=97.76% 00:31:45.371 nvme0n3: ios=1536/1627, merge=0/0, ticks=424/345, in_queue=769, util=88.89% 00:31:45.371 nvme0n4: ios=697/1024, merge=0/0, ticks=1212/223, in_queue=1435, util=97.68% 00:31:45.372 19:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:45.372 [global] 00:31:45.372 thread=1 00:31:45.372 invalidate=1 00:31:45.372 rw=randwrite 00:31:45.372 time_based=1 00:31:45.372 runtime=1 00:31:45.372 ioengine=libaio 00:31:45.372 direct=1 00:31:45.372 bs=4096 00:31:45.372 iodepth=1 00:31:45.372 norandommap=0 00:31:45.372 numjobs=1 00:31:45.372 00:31:45.372 verify_dump=1 00:31:45.372 verify_backlog=512 00:31:45.372 verify_state_save=0 00:31:45.372 do_verify=1 00:31:45.372 verify=crc32c-intel 00:31:45.372 [job0] 00:31:45.372 filename=/dev/nvme0n1 00:31:45.372 [job1] 00:31:45.372 filename=/dev/nvme0n2 00:31:45.372 [job2] 
00:31:45.372 filename=/dev/nvme0n3 00:31:45.372 [job3] 00:31:45.372 filename=/dev/nvme0n4 00:31:45.372 Could not set queue depth (nvme0n1) 00:31:45.372 Could not set queue depth (nvme0n2) 00:31:45.372 Could not set queue depth (nvme0n3) 00:31:45.372 Could not set queue depth (nvme0n4) 00:31:45.372 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.372 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.372 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.372 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.372 fio-3.35 00:31:45.372 Starting 4 threads 00:31:46.746 00:31:46.746 job0: (groupid=0, jobs=1): err= 0: pid=375955: Fri Dec 6 19:30:31 2024 00:31:46.747 read: IOPS=509, BW=2039KiB/s (2087kB/s)(2116KiB/1038msec) 00:31:46.747 slat (nsec): min=4489, max=26645, avg=6853.34, stdev=3498.19 00:31:46.747 clat (usec): min=219, max=41967, avg=1573.83, stdev=7196.32 00:31:46.747 lat (usec): min=223, max=41981, avg=1580.69, stdev=7197.79 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:31:46.747 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:31:46.747 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 502], 00:31:46.747 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:46.747 | 99.99th=[42206] 00:31:46.747 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:31:46.747 slat (nsec): min=6210, max=38360, avg=8900.17, stdev=3471.25 00:31:46.747 clat (usec): min=136, max=616, avg=182.30, stdev=31.93 00:31:46.747 lat (usec): min=154, max=634, avg=191.20, stdev=32.73 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 
20.00th=[ 163], 00:31:46.747 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:31:46.747 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 212], 95.00th=[ 251], 00:31:46.747 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 420], 99.95th=[ 619], 00:31:46.747 | 99.99th=[ 619] 00:31:46.747 bw ( KiB/s): min= 8192, max= 8192, per=35.38%, avg=8192.00, stdev= 0.00, samples=1 00:31:46.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:46.747 lat (usec) : 250=78.17%, 500=20.03%, 750=0.71% 00:31:46.747 lat (msec) : 50=1.09% 00:31:46.747 cpu : usr=0.48%, sys=1.45%, ctx=1555, majf=0, minf=1 00:31:46.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.747 job1: (groupid=0, jobs=1): err= 0: pid=375956: Fri Dec 6 19:30:31 2024 00:31:46.747 read: IOPS=1647, BW=6589KiB/s (6748kB/s)(6596KiB/1001msec) 00:31:46.747 slat (nsec): min=6416, max=21715, avg=7661.39, stdev=1396.75 00:31:46.747 clat (usec): min=180, max=41326, avg=316.79, stdev=1420.94 00:31:46.747 lat (usec): min=187, max=41334, avg=324.46, stdev=1420.96 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:31:46.747 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 277], 00:31:46.747 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 338], 00:31:46.747 | 99.00th=[ 474], 99.50th=[ 519], 99.90th=[40633], 99.95th=[41157], 00:31:46.747 | 99.99th=[41157] 00:31:46.747 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:31:46.747 slat (usec): min=8, max=8316, avg=14.18, stdev=183.55 00:31:46.747 clat (usec): min=131, max=470, avg=208.13, 
stdev=58.10 00:31:46.747 lat (usec): min=142, max=8555, avg=222.31, stdev=193.38 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:31:46.747 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 196], 00:31:46.747 | 70.00th=[ 227], 80.00th=[ 253], 90.00th=[ 310], 95.00th=[ 330], 00:31:46.747 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 429], 99.95th=[ 437], 00:31:46.747 | 99.99th=[ 469] 00:31:46.747 bw ( KiB/s): min= 8192, max= 8192, per=35.38%, avg=8192.00, stdev= 0.00, samples=1 00:31:46.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:46.747 lat (usec) : 250=63.78%, 500=35.87%, 750=0.30% 00:31:46.747 lat (msec) : 50=0.05% 00:31:46.747 cpu : usr=1.90%, sys=4.90%, ctx=3699, majf=0, minf=1 00:31:46.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 issued rwts: total=1649,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.747 job2: (groupid=0, jobs=1): err= 0: pid=375957: Fri Dec 6 19:30:31 2024 00:31:46.747 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:31:46.747 slat (nsec): min=7611, max=28529, avg=8731.79, stdev=1507.39 00:31:46.747 clat (usec): min=198, max=40989, avg=353.47, stdev=1464.69 00:31:46.747 lat (usec): min=206, max=40998, avg=362.21, stdev=1464.68 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:31:46.747 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 297], 00:31:46.747 | 70.00th=[ 318], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 445], 00:31:46.747 | 99.00th=[ 537], 99.50th=[ 611], 99.90th=[40633], 99.95th=[41157], 00:31:46.747 | 99.99th=[41157] 00:31:46.747 write: IOPS=1791, 
BW=7165KiB/s (7337kB/s)(7172KiB/1001msec); 0 zone resets 00:31:46.747 slat (nsec): min=8414, max=32759, avg=11299.95, stdev=2390.27 00:31:46.747 clat (usec): min=143, max=453, avg=230.93, stdev=54.78 00:31:46.747 lat (usec): min=152, max=472, avg=242.23, stdev=55.43 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 186], 00:31:46.747 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 231], 00:31:46.747 | 70.00th=[ 243], 80.00th=[ 277], 90.00th=[ 318], 95.00th=[ 347], 00:31:46.747 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 449], 99.95th=[ 453], 00:31:46.747 | 99.99th=[ 453] 00:31:46.747 bw ( KiB/s): min= 8192, max= 8192, per=35.38%, avg=8192.00, stdev= 0.00, samples=1 00:31:46.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:46.747 lat (usec) : 250=51.52%, 500=47.70%, 750=0.66%, 1000=0.06% 00:31:46.747 lat (msec) : 50=0.06% 00:31:46.747 cpu : usr=1.40%, sys=5.40%, ctx=3331, majf=0, minf=1 00:31:46.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 issued rwts: total=1536,1793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.747 job3: (groupid=0, jobs=1): err= 0: pid=375958: Fri Dec 6 19:30:31 2024 00:31:46.747 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:31:46.747 slat (nsec): min=4743, max=25628, avg=9314.59, stdev=3495.92 00:31:46.747 clat (usec): min=214, max=41034, avg=651.60, stdev=3795.13 00:31:46.747 lat (usec): min=222, max=41044, avg=660.92, stdev=3795.46 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:31:46.747 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 302], 00:31:46.747 | 70.00th=[ 318], 
80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 412], 00:31:46.747 | 99.00th=[ 717], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:46.747 | 99.99th=[41157] 00:31:46.747 write: IOPS=1142, BW=4571KiB/s (4681kB/s)(4576KiB/1001msec); 0 zone resets 00:31:46.747 slat (nsec): min=6910, max=49372, avg=9930.30, stdev=3449.27 00:31:46.747 clat (usec): min=145, max=3951, avg=267.47, stdev=137.59 00:31:46.747 lat (usec): min=153, max=3962, avg=277.40, stdev=138.15 00:31:46.747 clat percentiles (usec): 00:31:46.747 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 208], 00:31:46.747 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 243], 00:31:46.747 | 70.00th=[ 269], 80.00th=[ 326], 90.00th=[ 400], 95.00th=[ 445], 00:31:46.747 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 824], 99.95th=[ 3949], 00:31:46.747 | 99.99th=[ 3949] 00:31:46.747 bw ( KiB/s): min= 4096, max= 4096, per=17.69%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.747 lat (usec) : 250=49.54%, 500=49.08%, 750=0.83%, 1000=0.05% 00:31:46.747 lat (msec) : 2=0.05%, 4=0.05%, 50=0.42% 00:31:46.747 cpu : usr=1.30%, sys=2.30%, ctx=2168, majf=0, minf=2 00:31:46.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.747 issued rwts: total=1024,1144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.747 00:31:46.747 Run status group 0 (all jobs): 00:31:46.747 READ: bw=17.8MiB/s (18.7MB/s), 2039KiB/s-6589KiB/s (2087kB/s-6748kB/s), io=18.5MiB (19.4MB), run=1001-1038msec 00:31:46.747 WRITE: bw=22.6MiB/s (23.7MB/s), 3946KiB/s-8184KiB/s (4041kB/s-8380kB/s), io=23.5MiB (24.6MB), run=1001-1038msec 00:31:46.747 00:31:46.747 Disk stats (read/write): 00:31:46.747 nvme0n1: 
ios=568/1024, merge=0/0, ticks=965/188, in_queue=1153, util=96.79% 00:31:46.747 nvme0n2: ios=1441/1536, merge=0/0, ticks=1155/334, in_queue=1489, util=96.95% 00:31:46.747 nvme0n3: ios=1207/1536, merge=0/0, ticks=1079/351, in_queue=1430, util=96.76% 00:31:46.747 nvme0n4: ios=543/1024, merge=0/0, ticks=547/277, in_queue=824, util=89.70% 00:31:46.747 19:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:46.747 [global] 00:31:46.747 thread=1 00:31:46.747 invalidate=1 00:31:46.747 rw=write 00:31:46.747 time_based=1 00:31:46.747 runtime=1 00:31:46.747 ioengine=libaio 00:31:46.747 direct=1 00:31:46.747 bs=4096 00:31:46.747 iodepth=128 00:31:46.747 norandommap=0 00:31:46.747 numjobs=1 00:31:46.747 00:31:46.747 verify_dump=1 00:31:46.747 verify_backlog=512 00:31:46.747 verify_state_save=0 00:31:46.747 do_verify=1 00:31:46.747 verify=crc32c-intel 00:31:46.747 [job0] 00:31:46.747 filename=/dev/nvme0n1 00:31:46.747 [job1] 00:31:46.747 filename=/dev/nvme0n2 00:31:46.747 [job2] 00:31:46.747 filename=/dev/nvme0n3 00:31:46.747 [job3] 00:31:46.747 filename=/dev/nvme0n4 00:31:46.747 Could not set queue depth (nvme0n1) 00:31:46.747 Could not set queue depth (nvme0n2) 00:31:46.747 Could not set queue depth (nvme0n3) 00:31:46.747 Could not set queue depth (nvme0n4) 00:31:47.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:47.007 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:47.007 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:47.007 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:47.007 fio-3.35 00:31:47.007 Starting 4 threads 00:31:48.388 00:31:48.388 job0: (groupid=0, 
jobs=1): err= 0: pid=376184: Fri Dec 6 19:30:33 2024 00:31:48.388 read: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1002msec) 00:31:48.388 slat (usec): min=2, max=24041, avg=112.08, stdev=935.86 00:31:48.388 clat (usec): min=1197, max=45805, avg=14713.16, stdev=5099.25 00:31:48.388 lat (usec): min=1203, max=48093, avg=14825.24, stdev=5167.40 00:31:48.388 clat percentiles (usec): 00:31:48.388 | 1.00th=[ 4146], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[11076], 00:31:48.388 | 30.00th=[11600], 40.00th=[12911], 50.00th=[13566], 60.00th=[15533], 00:31:48.388 | 70.00th=[16712], 80.00th=[17171], 90.00th=[20841], 95.00th=[22676], 00:31:48.388 | 99.00th=[33817], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:48.388 | 99.99th=[45876] 00:31:48.388 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:31:48.388 slat (usec): min=3, max=15043, avg=99.71, stdev=800.12 00:31:48.388 clat (usec): min=3957, max=40041, avg=13226.65, stdev=4179.42 00:31:48.388 lat (usec): min=5014, max=40054, avg=13326.35, stdev=4243.04 00:31:48.388 clat percentiles (usec): 00:31:48.388 | 1.00th=[ 6063], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:31:48.388 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12649], 00:31:48.388 | 70.00th=[13960], 80.00th=[15139], 90.00th=[18220], 95.00th=[22938], 00:31:48.388 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:31:48.388 | 99.99th=[40109] 00:31:48.388 bw ( KiB/s): min=17932, max=17932, per=27.00%, avg=17932.00, stdev= 0.00, samples=1 00:31:48.388 iops : min= 4483, max= 4483, avg=4483.00, stdev= 0.00, samples=1 00:31:48.388 lat (msec) : 2=0.10%, 4=0.35%, 10=11.36%, 20=79.31%, 50=8.88% 00:31:48.388 cpu : usr=3.70%, sys=5.79%, ctx=258, majf=0, minf=1 00:31:48.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:48.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:31:48.388 issued rwts: total=4427,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.388 job1: (groupid=0, jobs=1): err= 0: pid=376185: Fri Dec 6 19:30:33 2024 00:31:48.388 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:31:48.388 slat (usec): min=2, max=14261, avg=87.33, stdev=714.81 00:31:48.388 clat (usec): min=3617, max=28395, avg=11435.32, stdev=2928.39 00:31:48.388 lat (usec): min=3622, max=28473, avg=11522.65, stdev=2992.15 00:31:48.388 clat percentiles (usec): 00:31:48.388 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 8356], 20.00th=[ 9634], 00:31:48.388 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10683], 60.00th=[10945], 00:31:48.388 | 70.00th=[12125], 80.00th=[14091], 90.00th=[15664], 95.00th=[16909], 00:31:48.388 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:31:48.388 | 99.99th=[28443] 00:31:48.388 write: IOPS=5049, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1012msec); 0 zone resets 00:31:48.388 slat (usec): min=3, max=12886, avg=107.36, stdev=737.57 00:31:48.388 clat (usec): min=373, max=113787, avg=14728.50, stdev=15323.34 00:31:48.388 lat (usec): min=389, max=113796, avg=14835.87, stdev=15425.11 00:31:48.388 clat percentiles (msec): 00:31:48.388 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:31:48.388 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:31:48.388 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 17], 95.00th=[ 43], 00:31:48.388 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 114], 99.95th=[ 114], 00:31:48.388 | 99.99th=[ 114] 00:31:48.388 bw ( KiB/s): min=15304, max=24560, per=30.01%, avg=19932.00, stdev=6544.98, samples=2 00:31:48.388 iops : min= 3826, max= 6140, avg=4983.00, stdev=1636.25, samples=2 00:31:48.388 lat (usec) : 500=0.02%, 750=0.05% 00:31:48.388 lat (msec) : 2=0.10%, 4=0.61%, 10=26.23%, 20=67.95%, 50=3.00% 00:31:48.388 lat (msec) : 100=1.43%, 250=0.61% 00:31:48.388 cpu : usr=2.97%, 
sys=6.13%, ctx=380, majf=0, minf=2 00:31:48.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:48.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.388 issued rwts: total=4608,5110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.388 job2: (groupid=0, jobs=1): err= 0: pid=376186: Fri Dec 6 19:30:33 2024 00:31:48.388 read: IOPS=4131, BW=16.1MiB/s (16.9MB/s)(16.5MiB/1024msec) 00:31:48.388 slat (usec): min=2, max=10559, avg=97.03, stdev=608.44 00:31:48.388 clat (usec): min=6504, max=48592, avg=13914.52, stdev=5380.78 00:31:48.388 lat (usec): min=6512, max=48596, avg=14011.54, stdev=5388.15 00:31:48.388 clat percentiles (usec): 00:31:48.388 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11469], 00:31:48.388 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:31:48.388 | 70.00th=[13566], 80.00th=[14484], 90.00th=[16188], 95.00th=[20055], 00:31:48.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:31:48.388 | 99.99th=[48497] 00:31:48.388 write: IOPS=4500, BW=17.6MiB/s (18.4MB/s)(18.0MiB/1024msec); 0 zone resets 00:31:48.388 slat (usec): min=3, max=41985, avg=115.98, stdev=988.03 00:31:48.388 clat (usec): min=4899, max=69815, avg=13272.91, stdev=5248.54 00:31:48.389 lat (usec): min=4907, max=91995, avg=13388.88, stdev=5424.98 00:31:48.389 clat percentiles (usec): 00:31:48.389 | 1.00th=[ 7242], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10945], 00:31:48.389 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:31:48.389 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[16712], 00:31:48.389 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45351], 99.95th=[46924], 00:31:48.389 | 99.99th=[69731] 00:31:48.389 bw ( KiB/s): min=16384, max=20480, per=27.75%, avg=18432.00, 
stdev=2896.31, samples=2 00:31:48.389 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:48.389 lat (msec) : 10=8.69%, 20=86.80%, 50=4.49%, 100=0.02% 00:31:48.389 cpu : usr=4.20%, sys=6.84%, ctx=252, majf=0, minf=1 00:31:48.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:48.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.389 issued rwts: total=4231,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.389 job3: (groupid=0, jobs=1): err= 0: pid=376187: Fri Dec 6 19:30:33 2024 00:31:48.389 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:31:48.389 slat (usec): min=2, max=29054, avg=217.42, stdev=1681.30 00:31:48.389 clat (usec): min=9208, max=74519, avg=30171.30, stdev=17780.51 00:31:48.389 lat (usec): min=9213, max=74535, avg=30388.72, stdev=17930.20 00:31:48.389 clat percentiles (usec): 00:31:48.389 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[11338], 20.00th=[12911], 00:31:48.389 | 30.00th=[14353], 40.00th=[16712], 50.00th=[22938], 60.00th=[37487], 00:31:48.389 | 70.00th=[42206], 80.00th=[46924], 90.00th=[54789], 95.00th=[63701], 00:31:48.389 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[70779], 00:31:48.389 | 99.99th=[74974] 00:31:48.389 write: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1004msec); 0 zone resets 00:31:48.389 slat (usec): min=3, max=25189, avg=130.73, stdev=1183.99 00:31:48.389 clat (usec): min=560, max=69360, avg=18716.18, stdev=11713.48 00:31:48.389 lat (usec): min=566, max=69404, avg=18846.92, stdev=11831.12 00:31:48.389 clat percentiles (usec): 00:31:48.389 | 1.00th=[ 1975], 5.00th=[ 5932], 10.00th=[ 7898], 20.00th=[ 9634], 00:31:48.389 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14484], 60.00th=[16319], 00:31:48.389 | 70.00th=[17957], 80.00th=[25822], 90.00th=[34341], 
95.00th=[44303], 00:31:48.389 | 99.00th=[53216], 99.50th=[57410], 99.90th=[58459], 99.95th=[67634], 00:31:48.389 | 99.99th=[69731] 00:31:48.389 bw ( KiB/s): min= 7360, max=13149, per=15.44%, avg=10254.50, stdev=4093.44, samples=2 00:31:48.389 iops : min= 1840, max= 3287, avg=2563.50, stdev=1023.18, samples=2 00:31:48.389 lat (usec) : 750=0.08% 00:31:48.389 lat (msec) : 2=0.46%, 4=0.63%, 10=11.44%, 20=46.40%, 50=31.16% 00:31:48.389 lat (msec) : 100=9.84% 00:31:48.389 cpu : usr=2.39%, sys=3.89%, ctx=152, majf=0, minf=1 00:31:48.389 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:48.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.389 issued rwts: total=2560,2675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.389 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.389 00:31:48.389 Run status group 0 (all jobs): 00:31:48.389 READ: bw=60.4MiB/s (63.3MB/s), 9.96MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=61.8MiB (64.8MB), run=1002-1024msec 00:31:48.389 WRITE: bw=64.9MiB/s (68.0MB/s), 10.4MiB/s-19.7MiB/s (10.9MB/s-20.7MB/s), io=66.4MiB (69.6MB), run=1002-1024msec 00:31:48.389 00:31:48.389 Disk stats (read/write): 00:31:48.389 nvme0n1: ios=3623/3831, merge=0/0, ticks=44719/39575, in_queue=84294, util=98.70% 00:31:48.389 nvme0n2: ios=4657/4759, merge=0/0, ticks=49664/47927, in_queue=97591, util=88.43% 00:31:48.389 nvme0n3: ios=3604/3682, merge=0/0, ticks=19793/17954, in_queue=37747, util=95.64% 00:31:48.389 nvme0n4: ios=2182/2560, merge=0/0, ticks=35049/24977, in_queue=60026, util=100.00% 00:31:48.389 19:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:48.389 [global] 00:31:48.389 thread=1 00:31:48.389 invalidate=1 00:31:48.389 rw=randwrite 00:31:48.389 
time_based=1 00:31:48.389 runtime=1 00:31:48.389 ioengine=libaio 00:31:48.389 direct=1 00:31:48.389 bs=4096 00:31:48.389 iodepth=128 00:31:48.389 norandommap=0 00:31:48.389 numjobs=1 00:31:48.389 00:31:48.389 verify_dump=1 00:31:48.389 verify_backlog=512 00:31:48.389 verify_state_save=0 00:31:48.389 do_verify=1 00:31:48.389 verify=crc32c-intel 00:31:48.389 [job0] 00:31:48.389 filename=/dev/nvme0n1 00:31:48.389 [job1] 00:31:48.389 filename=/dev/nvme0n2 00:31:48.389 [job2] 00:31:48.389 filename=/dev/nvme0n3 00:31:48.389 [job3] 00:31:48.389 filename=/dev/nvme0n4 00:31:48.389 Could not set queue depth (nvme0n1) 00:31:48.389 Could not set queue depth (nvme0n2) 00:31:48.389 Could not set queue depth (nvme0n3) 00:31:48.389 Could not set queue depth (nvme0n4) 00:31:48.389 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.389 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.389 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.389 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.389 fio-3.35 00:31:48.389 Starting 4 threads 00:31:49.770 00:31:49.770 job0: (groupid=0, jobs=1): err= 0: pid=376433: Fri Dec 6 19:30:34 2024 00:31:49.770 read: IOPS=4348, BW=17.0MiB/s (17.8MB/s)(17.8MiB/1045msec) 00:31:49.770 slat (usec): min=2, max=7640, avg=105.70, stdev=586.54 00:31:49.770 clat (usec): min=7207, max=66860, avg=14790.12, stdev=8374.24 00:31:49.770 lat (usec): min=7237, max=66865, avg=14895.81, stdev=8397.94 00:31:49.770 clat percentiles (usec): 00:31:49.770 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11076], 00:31:49.770 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:31:49.770 | 70.00th=[13435], 80.00th=[17171], 90.00th=[23200], 95.00th=[23987], 00:31:49.770 | 
99.00th=[61080], 99.50th=[66323], 99.90th=[66847], 99.95th=[66847], 00:31:49.770 | 99.99th=[66847] 00:31:49.770 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:31:49.770 slat (usec): min=4, max=9114, avg=102.19, stdev=549.55 00:31:49.770 clat (usec): min=7360, max=27164, avg=13970.52, stdev=3997.31 00:31:49.770 lat (usec): min=7378, max=27214, avg=14072.71, stdev=4031.55 00:31:49.770 clat percentiles (usec): 00:31:49.770 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:31:49.770 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:31:49.770 | 70.00th=[15401], 80.00th=[17957], 90.00th=[20579], 95.00th=[22676], 00:31:49.770 | 99.00th=[23987], 99.50th=[24511], 99.90th=[27132], 99.95th=[27132], 00:31:49.770 | 99.99th=[27132] 00:31:49.770 bw ( KiB/s): min=16384, max=20480, per=28.40%, avg=18432.00, stdev=2896.31, samples=2 00:31:49.770 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:49.770 lat (msec) : 10=5.08%, 20=81.68%, 50=12.22%, 100=1.03% 00:31:49.770 cpu : usr=5.94%, sys=8.81%, ctx=416, majf=0, minf=1 00:31:49.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:49.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.770 issued rwts: total=4544,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.770 job1: (groupid=0, jobs=1): err= 0: pid=376442: Fri Dec 6 19:30:34 2024 00:31:49.770 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:31:49.770 slat (usec): min=3, max=12297, avg=102.82, stdev=591.94 00:31:49.770 clat (usec): min=7848, max=36942, avg=14209.01, stdev=4482.03 00:31:49.770 lat (usec): min=7857, max=36957, avg=14311.83, stdev=4515.03 00:31:49.770 clat percentiles (usec): 00:31:49.770 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10814], 
20.00th=[11863], 00:31:49.770 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:31:49.770 | 70.00th=[14222], 80.00th=[15926], 90.00th=[18220], 95.00th=[24773], 00:31:49.770 | 99.00th=[32375], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:31:49.770 | 99.99th=[36963] 00:31:49.770 write: IOPS=4148, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec); 0 zone resets 00:31:49.770 slat (usec): min=3, max=34353, avg=125.37, stdev=969.92 00:31:49.770 clat (usec): min=282, max=61175, avg=16231.84, stdev=9454.60 00:31:49.771 lat (usec): min=3351, max=61226, avg=16357.21, stdev=9516.36 00:31:49.771 clat percentiles (usec): 00:31:49.771 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11469], 00:31:49.771 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:31:49.771 | 70.00th=[15008], 80.00th=[18482], 90.00th=[33817], 95.00th=[43779], 00:31:49.771 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52691], 00:31:49.771 | 99.99th=[61080] 00:31:49.771 bw ( KiB/s): min=16384, max=16384, per=25.25%, avg=16384.00, stdev= 0.00, samples=1 00:31:49.771 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:31:49.771 lat (usec) : 500=0.01% 00:31:49.771 lat (msec) : 4=0.34%, 10=7.55%, 20=79.51%, 50=12.04%, 100=0.55% 00:31:49.771 cpu : usr=3.60%, sys=9.80%, ctx=280, majf=0, minf=1 00:31:49.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:49.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.771 issued rwts: total=4096,4153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.771 job2: (groupid=0, jobs=1): err= 0: pid=376477: Fri Dec 6 19:30:34 2024 00:31:49.771 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:31:49.771 slat (usec): min=2, max=11168, avg=129.48, stdev=723.32 00:31:49.771 
clat (usec): min=551, max=32066, avg=16717.13, stdev=5112.27 00:31:49.771 lat (usec): min=4353, max=32080, avg=16846.61, stdev=5125.74 00:31:49.771 clat percentiles (usec): 00:31:49.771 | 1.00th=[ 7635], 5.00th=[11076], 10.00th=[12256], 20.00th=[12780], 00:31:49.771 | 30.00th=[13829], 40.00th=[14353], 50.00th=[14746], 60.00th=[15270], 00:31:49.771 | 70.00th=[18744], 80.00th=[23200], 90.00th=[24249], 95.00th=[26608], 00:31:49.771 | 99.00th=[29492], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:31:49.771 | 99.99th=[32113] 00:31:49.771 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:31:49.771 slat (usec): min=3, max=29213, avg=142.20, stdev=1047.67 00:31:49.771 clat (usec): min=6669, max=70381, avg=18699.30, stdev=9709.66 00:31:49.771 lat (usec): min=6685, max=70430, avg=18841.50, stdev=9759.00 00:31:49.771 clat percentiles (usec): 00:31:49.771 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12387], 20.00th=[13173], 00:31:49.771 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[15926], 00:31:49.771 | 70.00th=[20579], 80.00th=[22938], 90.00th=[27395], 95.00th=[44303], 00:31:49.771 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:31:49.771 | 99.99th=[70779] 00:31:49.771 bw ( KiB/s): min=12032, max=16640, per=22.09%, avg=14336.00, stdev=3258.35, samples=2 00:31:49.771 iops : min= 3008, max= 4160, avg=3584.00, stdev=814.59, samples=2 00:31:49.771 lat (usec) : 750=0.01% 00:31:49.771 lat (msec) : 10=2.09%, 20=69.24%, 50=27.83%, 100=0.82% 00:31:49.771 cpu : usr=2.79%, sys=6.27%, ctx=257, majf=0, minf=1 00:31:49.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:49.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.771 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.771 latency : target=0, window=0, percentile=100.00%, depth=128 
00:31:49.771 job3: (groupid=0, jobs=1): err= 0: pid=376488: Fri Dec 6 19:30:34 2024 00:31:49.771 read: IOPS=4170, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1007msec) 00:31:49.771 slat (usec): min=2, max=14754, avg=109.03, stdev=830.95 00:31:49.771 clat (usec): min=4260, max=28925, avg=15043.75, stdev=3394.08 00:31:49.771 lat (usec): min=4273, max=28930, avg=15152.78, stdev=3444.17 00:31:49.771 clat percentiles (usec): 00:31:49.771 | 1.00th=[ 6325], 5.00th=[ 8291], 10.00th=[11731], 20.00th=[12649], 00:31:49.771 | 30.00th=[13173], 40.00th=[14615], 50.00th=[15008], 60.00th=[15664], 00:31:49.771 | 70.00th=[16319], 80.00th=[17171], 90.00th=[19268], 95.00th=[20579], 00:31:49.771 | 99.00th=[24773], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:31:49.771 | 99.99th=[28967] 00:31:49.771 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:31:49.771 slat (usec): min=3, max=14857, avg=99.53, stdev=698.88 00:31:49.771 clat (usec): min=507, max=38057, avg=14018.18, stdev=4201.98 00:31:49.771 lat (usec): min=2912, max=38067, avg=14117.71, stdev=4237.26 00:31:49.771 clat percentiles (usec): 00:31:49.771 | 1.00th=[ 4490], 5.00th=[ 7701], 10.00th=[ 9372], 20.00th=[11863], 00:31:49.771 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13566], 60.00th=[14353], 00:31:49.771 | 70.00th=[15664], 80.00th=[16319], 90.00th=[17695], 95.00th=[22152], 00:31:49.771 | 99.00th=[29754], 99.50th=[33817], 99.90th=[38011], 99.95th=[38011], 00:31:49.771 | 99.99th=[38011] 00:31:49.771 bw ( KiB/s): min=17008, max=19672, per=28.26%, avg=18340.00, stdev=1883.73, samples=2 00:31:49.771 iops : min= 4252, max= 4918, avg=4585.00, stdev=470.93, samples=2 00:31:49.771 lat (usec) : 750=0.01% 00:31:49.771 lat (msec) : 4=0.37%, 10=8.24%, 20=83.95%, 50=7.43% 00:31:49.771 cpu : usr=3.18%, sys=7.85%, ctx=348, majf=0, minf=1 00:31:49.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:49.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:49.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:49.771 issued rwts: total=4200,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:49.771 00:31:49.771 Run status group 0 (all jobs): 00:31:49.771 READ: bw=61.4MiB/s (64.3MB/s), 13.9MiB/s-17.0MiB/s (14.6MB/s-17.8MB/s), io=64.1MiB (67.2MB), run=1001-1045msec 00:31:49.771 WRITE: bw=63.4MiB/s (66.4MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.7MB/s), io=66.2MiB (69.4MB), run=1001-1045msec 00:31:49.771 00:31:49.771 Disk stats (read/write): 00:31:49.771 nvme0n1: ios=4127/4189, merge=0/0, ticks=16725/16230, in_queue=32955, util=89.78% 00:31:49.771 nvme0n2: ios=3121/3584, merge=0/0, ticks=15307/18426, in_queue=33733, util=93.70% 00:31:49.771 nvme0n3: ios=3129/3230, merge=0/0, ticks=15983/18572, in_queue=34555, util=93.53% 00:31:49.771 nvme0n4: ios=3612/3584, merge=0/0, ticks=36802/32945, in_queue=69747, util=96.63% 00:31:49.771 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:49.771 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=376669 00:31:49.771 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:49.771 19:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:49.771 [global] 00:31:49.771 thread=1 00:31:49.771 invalidate=1 00:31:49.771 rw=read 00:31:49.771 time_based=1 00:31:49.771 runtime=10 00:31:49.771 ioengine=libaio 00:31:49.771 direct=1 00:31:49.771 bs=4096 00:31:49.771 iodepth=1 00:31:49.771 norandommap=1 00:31:49.771 numjobs=1 00:31:49.771 00:31:49.771 [job0] 00:31:49.771 filename=/dev/nvme0n1 00:31:49.771 [job1] 00:31:49.771 filename=/dev/nvme0n2 00:31:49.771 [job2] 00:31:49.771 filename=/dev/nvme0n3 00:31:49.771 
[job3] 00:31:49.771 filename=/dev/nvme0n4 00:31:49.771 Could not set queue depth (nvme0n1) 00:31:49.771 Could not set queue depth (nvme0n2) 00:31:49.771 Could not set queue depth (nvme0n3) 00:31:49.771 Could not set queue depth (nvme0n4) 00:31:49.771 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.771 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.771 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.771 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:49.771 fio-3.35 00:31:49.771 Starting 4 threads 00:31:53.057 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:53.057 19:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:53.057 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39469056, buflen=4096 00:31:53.057 fio: pid=376771, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:53.315 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:53.315 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:53.315 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=323584, buflen=4096 00:31:53.315 fio: pid=376770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:53.573 19:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:53.573 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:53.573 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=540672, buflen=4096 00:31:53.573 fio: pid=376768, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:53.831 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56627200, buflen=4096 00:31:53.831 fio: pid=376769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:53.831 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:53.831 19:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:53.831 00:31:53.831 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=376768: Fri Dec 6 19:30:38 2024 00:31:53.831 read: IOPS=37, BW=149KiB/s (153kB/s)(528KiB/3534msec) 00:31:53.831 slat (usec): min=5, max=12904, avg=162.85, stdev=1262.20 00:31:53.831 clat (usec): min=249, max=43089, avg=26527.61, stdev=19575.70 00:31:53.831 lat (usec): min=255, max=55993, avg=26691.57, stdev=19743.59 00:31:53.831 clat percentiles (usec): 00:31:53.831 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 289], 00:31:53.831 | 30.00th=[ 314], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:31:53.831 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:53.831 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:53.831 | 99.99th=[43254] 
00:31:53.831 bw ( KiB/s): min= 96, max= 104, per=0.39%, avg=97.33, stdev= 3.27, samples=6 00:31:53.831 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:31:53.831 lat (usec) : 250=0.75%, 500=34.59% 00:31:53.831 lat (msec) : 50=63.91% 00:31:53.831 cpu : usr=0.11%, sys=0.00%, ctx=136, majf=0, minf=2 00:31:53.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.831 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.831 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:53.831 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=376769: Fri Dec 6 19:30:38 2024 00:31:53.831 read: IOPS=3643, BW=14.2MiB/s (14.9MB/s)(54.0MiB/3795msec) 00:31:53.831 slat (usec): min=4, max=16563, avg=13.63, stdev=229.94 00:31:53.831 clat (usec): min=191, max=40987, avg=257.71, stdev=504.89 00:31:53.831 lat (usec): min=198, max=41021, avg=271.35, stdev=555.98 00:31:53.831 clat percentiles (usec): 00:31:53.831 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:31:53.831 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 245], 00:31:53.831 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 347], 00:31:53.831 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 898], 99.95th=[ 1237], 00:31:53.831 | 99.99th=[40633] 00:31:53.831 bw ( KiB/s): min=11952, max=17264, per=58.56%, avg=14611.43, stdev=1713.51, samples=7 00:31:53.831 iops : min= 2988, max= 4316, avg=3652.86, stdev=428.38, samples=7 00:31:53.831 lat (usec) : 250=64.14%, 500=35.59%, 750=0.09%, 1000=0.10% 00:31:53.831 lat (msec) : 2=0.05%, 10=0.01%, 20=0.01%, 50=0.01% 00:31:53.831 cpu : usr=1.37%, sys=5.11%, ctx=13832, majf=0, minf=2 00:31:53.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:53.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.831 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.831 issued rwts: total=13826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:53.832 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=376770: Fri Dec 6 19:30:38 2024 00:31:53.832 read: IOPS=24, BW=97.7KiB/s (100kB/s)(316KiB/3234msec) 00:31:53.832 slat (usec): min=8, max=9878, avg=141.05, stdev=1102.53 00:31:53.832 clat (usec): min=408, max=42007, avg=40496.00, stdev=4571.23 00:31:53.832 lat (usec): min=431, max=50949, avg=40638.65, stdev=4718.80 00:31:53.832 clat percentiles (usec): 00:31:53.832 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:53.832 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:53.832 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:53.832 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:53.832 | 99.99th=[42206] 00:31:53.832 bw ( KiB/s): min= 96, max= 104, per=0.39%, avg=98.67, stdev= 4.13, samples=6 00:31:53.832 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:31:53.832 lat (usec) : 500=1.25% 00:31:53.832 lat (msec) : 50=97.50% 00:31:53.832 cpu : usr=0.06%, sys=0.00%, ctx=82, majf=0, minf=1 00:31:53.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.832 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.832 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:53.832 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=376771: Fri Dec 6 19:30:38 2024 
00:31:53.832 read: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(37.6MiB/2933msec) 00:31:53.832 slat (nsec): min=5588, max=62529, avg=10633.96, stdev=4863.94 00:31:53.832 clat (usec): min=208, max=41009, avg=288.21, stdev=589.22 00:31:53.832 lat (usec): min=216, max=41017, avg=298.84, stdev=589.59 00:31:53.832 clat percentiles (usec): 00:31:53.832 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:31:53.832 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:31:53.832 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 392], 95.00th=[ 429], 00:31:53.832 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[ 611], 00:31:53.832 | 99.99th=[41157] 00:31:53.832 bw ( KiB/s): min=10040, max=15384, per=51.81%, avg=12926.40, stdev=2010.93, samples=5 00:31:53.832 iops : min= 2510, max= 3846, avg=3231.60, stdev=502.73, samples=5 00:31:53.832 lat (usec) : 250=43.78%, 500=55.35%, 750=0.83% 00:31:53.832 lat (msec) : 2=0.01%, 50=0.02% 00:31:53.832 cpu : usr=1.64%, sys=5.76%, ctx=9639, majf=0, minf=1 00:31:53.832 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.832 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.832 issued rwts: total=9637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:53.832 00:31:53.832 Run status group 0 (all jobs): 00:31:53.832 READ: bw=24.4MiB/s (25.5MB/s), 97.7KiB/s-14.2MiB/s (100kB/s-14.9MB/s), io=92.5MiB (97.0MB), run=2933-3795msec 00:31:53.832 00:31:53.832 Disk stats (read/write): 00:31:53.832 nvme0n1: ios=165/0, merge=0/0, ticks=4381/0, in_queue=4381, util=98.80% 00:31:53.832 nvme0n2: ios=13181/0, merge=0/0, ticks=3511/0, in_queue=3511, util=98.82% 00:31:53.832 nvme0n3: ios=118/0, merge=0/0, ticks=3886/0, in_queue=3886, util=99.19% 00:31:53.832 nvme0n4: ios=9508/0, merge=0/0, ticks=3476/0, in_queue=3476, util=99.32% 
00:31:54.090 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:54.090 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:54.349 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:54.349 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:54.607 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:54.607 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:54.865 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:54.865 19:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:55.124 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:55.124 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 376669 00:31:55.124 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:55.124 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:31:55.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:55.381 nvmf hotplug test: fio failed as expected 00:31:55.381 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.639 rmmod nvme_tcp 00:31:55.639 rmmod nvme_fabrics 00:31:55.639 rmmod nvme_keyring 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 374646 ']' 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 374646 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 374646 ']' 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 374646 00:31:55.639 19:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.639 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 374646 00:31:55.897 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.897 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.897 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 374646' 00:31:55.898 killing process with pid 374646 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 374646 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 374646 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.898 19:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.898 19:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.443 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.443 00:31:58.443 real 0m24.034s 00:31:58.443 user 1m8.282s 00:31:58.443 sys 0m10.307s 00:31:58.443 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.443 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.443 ************************************ 00:31:58.443 END TEST nvmf_fio_target 00:31:58.443 ************************************ 00:31:58.443 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:58.443 19:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.443 ************************************ 00:31:58.443 START TEST nvmf_bdevio 00:31:58.443 
************************************ 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:58.443 * Looking for test storage... 00:31:58.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.443 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.444 --rc genhtml_branch_coverage=1 00:31:58.444 --rc genhtml_function_coverage=1 00:31:58.444 --rc genhtml_legend=1 00:31:58.444 --rc geninfo_all_blocks=1 00:31:58.444 --rc geninfo_unexecuted_blocks=1 00:31:58.444 00:31:58.444 ' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.444 --rc genhtml_branch_coverage=1 00:31:58.444 --rc genhtml_function_coverage=1 00:31:58.444 --rc genhtml_legend=1 00:31:58.444 --rc geninfo_all_blocks=1 00:31:58.444 --rc geninfo_unexecuted_blocks=1 00:31:58.444 00:31:58.444 ' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.444 --rc genhtml_branch_coverage=1 00:31:58.444 --rc genhtml_function_coverage=1 00:31:58.444 --rc genhtml_legend=1 00:31:58.444 --rc geninfo_all_blocks=1 00:31:58.444 --rc geninfo_unexecuted_blocks=1 00:31:58.444 00:31:58.444 ' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:58.444 --rc genhtml_branch_coverage=1 00:31:58.444 --rc genhtml_function_coverage=1 00:31:58.444 --rc genhtml_legend=1 00:31:58.444 --rc geninfo_all_blocks=1 00:31:58.444 --rc geninfo_unexecuted_blocks=1 00:31:58.444 00:31:58.444 ' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.444 19:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.444 19:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.444 19:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.406 19:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.406 19:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:00.406 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:00.406 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:00.406 Found net devices under 0000:84:00.0: cvl_0_0 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:00.406 Found net devices under 0000:84:00.1: cvl_0_1 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.406 
19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.406 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.663 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:32:00.664 00:32:00.664 --- 10.0.0.2 ping statistics --- 00:32:00.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.664 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:32:00.664 00:32:00.664 --- 10.0.0.1 ping statistics --- 00:32:00.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.664 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=379411 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 379411 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 379411 ']' 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.664 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.664 [2024-12-06 19:30:45.644261] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.664 [2024-12-06 19:30:45.645311] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:32:00.664 [2024-12-06 19:30:45.645361] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.921 [2024-12-06 19:30:45.717478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.921 [2024-12-06 19:30:45.775197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.921 [2024-12-06 19:30:45.775253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.921 [2024-12-06 19:30:45.775283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.921 [2024-12-06 19:30:45.775302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.921 [2024-12-06 19:30:45.775312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.921 [2024-12-06 19:30:45.777033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:00.921 [2024-12-06 19:30:45.777112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:00.921 [2024-12-06 19:30:45.777170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:00.921 [2024-12-06 19:30:45.777174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.921 [2024-12-06 19:30:45.869253] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.921 [2024-12-06 19:30:45.869511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.921 [2024-12-06 19:30:45.869806] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:00.921 [2024-12-06 19:30:45.870428] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.921 [2024-12-06 19:30:45.870637] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.921 [2024-12-06 19:30:45.921957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.921 Malloc0 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.921 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:01.180 [2024-12-06 19:30:45.990094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:01.180 { 00:32:01.180 "params": { 00:32:01.180 "name": "Nvme$subsystem", 00:32:01.180 "trtype": "$TEST_TRANSPORT", 00:32:01.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.180 "adrfam": "ipv4", 00:32:01.180 "trsvcid": "$NVMF_PORT", 00:32:01.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.180 "hdgst": ${hdgst:-false}, 00:32:01.180 "ddgst": ${ddgst:-false} 00:32:01.180 }, 00:32:01.180 "method": "bdev_nvme_attach_controller" 00:32:01.180 } 00:32:01.180 EOF 00:32:01.180 )") 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:01.180 19:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:01.180 "params": { 00:32:01.180 "name": "Nvme1", 00:32:01.180 "trtype": "tcp", 00:32:01.180 "traddr": "10.0.0.2", 00:32:01.180 "adrfam": "ipv4", 00:32:01.180 "trsvcid": "4420", 00:32:01.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:01.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:01.180 "hdgst": false, 00:32:01.180 "ddgst": false 00:32:01.180 }, 00:32:01.180 "method": "bdev_nvme_attach_controller" 00:32:01.180 }' 00:32:01.180 [2024-12-06 19:30:46.036618] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:32:01.180 [2024-12-06 19:30:46.036690] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379554 ] 00:32:01.180 [2024-12-06 19:30:46.106310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.180 [2024-12-06 19:30:46.169176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.180 [2024-12-06 19:30:46.169239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.180 [2024-12-06 19:30:46.169242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.438 I/O targets: 00:32:01.438 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:01.438 00:32:01.438 00:32:01.438 CUnit - A unit testing framework for C - Version 2.1-3 00:32:01.438 http://cunit.sourceforge.net/ 00:32:01.438 00:32:01.438 00:32:01.438 Suite: bdevio tests on: Nvme1n1 00:32:01.439 Test: blockdev write read block ...passed 00:32:01.439 Test: blockdev write zeroes read block ...passed 00:32:01.697 Test: blockdev write zeroes read no split ...passed 00:32:01.697 Test: blockdev 
write zeroes read split ...passed 00:32:01.697 Test: blockdev write zeroes read split partial ...passed 00:32:01.697 Test: blockdev reset ...[2024-12-06 19:30:46.544717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:01.697 [2024-12-06 19:30:46.544861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1932a70 (9): Bad file descriptor 00:32:01.697 [2024-12-06 19:30:46.637397] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:01.697 passed 00:32:01.697 Test: blockdev write read 8 blocks ...passed 00:32:01.697 Test: blockdev write read size > 128k ...passed 00:32:01.697 Test: blockdev write read invalid size ...passed 00:32:01.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:01.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:01.697 Test: blockdev write read max offset ...passed 00:32:02.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:02.009 Test: blockdev writev readv 8 blocks ...passed 00:32:02.009 Test: blockdev writev readv 30 x 1block ...passed 00:32:02.009 Test: blockdev writev readv block ...passed 00:32:02.009 Test: blockdev writev readv size > 128k ...passed 00:32:02.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:02.009 Test: blockdev comparev and writev ...[2024-12-06 19:30:46.896229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.896279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.896305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 
[2024-12-06 19:30:46.896323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.896878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.896903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.896925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.896941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.897480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.897504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.897526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.897542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.898083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.898107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.898129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:02.009 [2024-12-06 19:30:46.898146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:02.009 passed 00:32:02.009 Test: blockdev nvme passthru rw ...passed 00:32:02.009 Test: blockdev nvme passthru vendor specific ...[2024-12-06 19:30:46.979997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.009 [2024-12-06 19:30:46.980025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.980179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.009 [2024-12-06 19:30:46.980203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.980365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.009 [2024-12-06 19:30:46.980388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:02.009 [2024-12-06 19:30:46.980539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:02.009 [2024-12-06 19:30:46.980562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:02.009 passed 00:32:02.009 Test: blockdev nvme admin passthru ...passed 00:32:02.009 Test: blockdev copy ...passed 00:32:02.009 00:32:02.009 Run Summary: Type Total Ran Passed Failed Inactive 00:32:02.009 suites 1 1 n/a 0 0 00:32:02.009 tests 23 23 23 0 0 00:32:02.009 asserts 152 152 152 0 n/a 00:32:02.009 00:32:02.009 Elapsed time = 1.298 
seconds 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.266 rmmod nvme_tcp 00:32:02.266 rmmod nvme_fabrics 00:32:02.266 rmmod nvme_keyring 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 379411 ']' 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 379411 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 379411 ']' 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 379411 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.266 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 379411 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 379411' 00:32:02.523 killing process with pid 379411 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 379411 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 379411 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.523 19:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.057 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.057 00:32:05.057 real 0m6.577s 00:32:05.057 user 0m8.729s 00:32:05.057 sys 0m2.697s 00:32:05.058 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.058 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:05.058 ************************************ 00:32:05.058 END TEST nvmf_bdevio 00:32:05.058 ************************************ 00:32:05.058 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:05.058 00:32:05.058 real 3m54.761s 00:32:05.058 user 8m52.083s 00:32:05.058 sys 1m26.490s 00:32:05.058 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:32:05.058 19:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.058 ************************************ 00:32:05.058 END TEST nvmf_target_core_interrupt_mode 00:32:05.058 ************************************ 00:32:05.058 19:30:49 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:05.058 19:30:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.058 19:30:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.058 19:30:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:05.058 ************************************ 00:32:05.058 START TEST nvmf_interrupt 00:32:05.058 ************************************ 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:05.058 * Looking for test storage... 
00:32:05.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.058 --rc genhtml_branch_coverage=1 00:32:05.058 --rc genhtml_function_coverage=1 00:32:05.058 --rc genhtml_legend=1 00:32:05.058 --rc geninfo_all_blocks=1 00:32:05.058 --rc geninfo_unexecuted_blocks=1 00:32:05.058 00:32:05.058 ' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.058 --rc genhtml_branch_coverage=1 00:32:05.058 --rc 
genhtml_function_coverage=1 00:32:05.058 --rc genhtml_legend=1 00:32:05.058 --rc geninfo_all_blocks=1 00:32:05.058 --rc geninfo_unexecuted_blocks=1 00:32:05.058 00:32:05.058 ' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.058 --rc genhtml_branch_coverage=1 00:32:05.058 --rc genhtml_function_coverage=1 00:32:05.058 --rc genhtml_legend=1 00:32:05.058 --rc geninfo_all_blocks=1 00:32:05.058 --rc geninfo_unexecuted_blocks=1 00:32:05.058 00:32:05.058 ' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:05.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.058 --rc genhtml_branch_coverage=1 00:32:05.058 --rc genhtml_function_coverage=1 00:32:05.058 --rc genhtml_legend=1 00:32:05.058 --rc geninfo_all_blocks=1 00:32:05.058 --rc geninfo_unexecuted_blocks=1 00:32:05.058 00:32:05.058 ' 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.058 
19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:05.058 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.059 
19:30:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.059 19:30:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.059 
19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.059 19:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.964 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.965 19:30:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:06.965 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:06.965 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.965 19:30:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:06.965 Found net devices under 0000:84:00.0: cvl_0_0 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:06.965 Found net devices under 0000:84:00.1: cvl_0_1 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.965 19:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.223 19:30:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.223 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:32:07.223 00:32:07.223 --- 10.0.0.2 ping statistics --- 00:32:07.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.224 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:07.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:07.224 00:32:07.224 --- 10.0.0.1 ping statistics --- 00:32:07.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.224 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.224 19:30:52 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=381658 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 381658 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 381658 ']' 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.224 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.224 [2024-12-06 19:30:52.168602] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.224 [2024-12-06 19:30:52.169648] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:32:07.224 [2024-12-06 19:30:52.169698] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.224 [2024-12-06 19:30:52.243719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:07.485 [2024-12-06 19:30:52.300379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.485 [2024-12-06 19:30:52.300451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.485 [2024-12-06 19:30:52.300471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.485 [2024-12-06 19:30:52.300489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.485 [2024-12-06 19:30:52.300504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.485 [2024-12-06 19:30:52.302116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.485 [2024-12-06 19:30:52.302123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.485 [2024-12-06 19:30:52.403237] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.485 [2024-12-06 19:30:52.403266] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:07.485 [2024-12-06 19:30:52.403521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:07.485 5000+0 records in 00:32:07.485 5000+0 records out 00:32:07.485 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0146843 s, 697 MB/s 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 AIO0 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.485 19:30:52 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 [2024-12-06 19:30:52.502417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:07.485 [2024-12-06 19:30:52.526746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 381658 0 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 381658 0 idle 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.485 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381658 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0' 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381658 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 381658 1 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 381658 1 idle 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.746 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:07.747 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381662 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381662 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 
reactor_1 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:08.006 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=381824 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 381658 0 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 381658 0 busy 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:08.007 19:30:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:08.007 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381658 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.48 reactor_0' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381658 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.48 reactor_0 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 381658 1 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 381658 1 busy 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381662 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.26 reactor_1' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381662 root 20 0 128.2g 48768 35328 R 93.3 0.1 0:00.26 reactor_1 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:08.266 19:30:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 381824 00:32:18.245 Initializing NVMe Controllers 00:32:18.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.245 Controller IO queue size 256, less than required. 00:32:18.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:18.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:18.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:18.245 Initialization complete. Launching workers. 
00:32:18.245 ======================================================== 00:32:18.245 Latency(us) 00:32:18.245 Device Information : IOPS MiB/s Average min max 00:32:18.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14566.03 56.90 17585.83 4675.01 21526.35 00:32:18.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14399.83 56.25 17789.74 4681.53 59812.34 00:32:18.245 ======================================================== 00:32:18.245 Total : 28965.86 113.15 17687.20 4675.01 59812.34 00:32:18.245 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 381658 0 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 381658 0 idle 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381658 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0' 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381658 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.22 reactor_0 00:32:18.245 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 381658 1 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 381658 1 idle 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.246 19:31:03 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:18.246 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381662 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381662 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.506 19:31:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.507 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:18.767 19:31:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
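After `nvme connect`, the test calls `waitforserial`, which (as the following trace shows) retries up to 16 times, counting block devices whose serial matches in `lsblk -l -o NAME,SERIAL`. A sketch of that polling pattern is below; `check_serial` is a stub I introduce here so the sketch runs anywhere, standing in for the real `lsblk | grep -c` pipeline, and the retry/sleep constants mirror the trace but are otherwise my reading of it.

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling pattern from autotest_common.sh:
# retry until the expected number of devices with the given serial appears.
check_serial() {
  # Real helper would run: lsblk -l -o NAME,SERIAL | grep -c "$1"
  echo 1   # stub: pretend one matching device exists
}

waitforserial() {
  local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
  while (( i++ <= 15 )); do              # bounded retry, as in the trace
    nvme_devices=$(check_serial "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 2                               # give the kernel time to enumerate
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```

The real helper sleeps before the first check (the `sleep 2` at common.sh@1209) rather than after a failed one; this sketch checks first, which only changes the best-case latency, not the bound.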
00:32:18.767 19:31:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:18.767 19:31:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:18.767 19:31:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:18.767 19:31:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 381658 0 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 381658 0 idle 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.675 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.676 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:20.676 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:20.934 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381658 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0' 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381658 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 381658 1 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 381658 1 idle 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=381658 00:32:20.935 
19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 381658 -w 256 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 381662 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 381662 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:20.935 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:21.194 19:31:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:21.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.194 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.194 rmmod nvme_tcp 00:32:21.455 rmmod nvme_fabrics 00:32:21.455 rmmod nvme_keyring 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.455 19:31:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 381658 ']' 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 381658 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 381658 ']' 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 381658 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381658 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381658' 00:32:21.455 killing process with pid 381658 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 381658 00:32:21.455 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 381658 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.715 19:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.627 19:31:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.627 00:32:23.627 real 0m18.987s 00:32:23.627 user 0m37.012s 00:32:23.627 sys 0m7.044s 00:32:23.627 19:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.627 19:31:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:23.627 ************************************ 00:32:23.627 END TEST nvmf_interrupt 00:32:23.627 ************************************ 00:32:23.887 00:32:23.887 real 25m11.575s 00:32:23.887 user 58m42.057s 00:32:23.887 sys 6m57.175s 00:32:23.887 19:31:08 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.887 19:31:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.887 ************************************ 00:32:23.887 END TEST nvmf_tcp 00:32:23.887 ************************************ 00:32:23.887 19:31:08 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:23.887 19:31:08 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:23.887 19:31:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:23.887 19:31:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.887 19:31:08 -- common/autotest_common.sh@10 -- # set +x 00:32:23.887 ************************************ 
00:32:23.887 START TEST spdkcli_nvmf_tcp 00:32:23.887 ************************************ 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:23.887 * Looking for test storage... 00:32:23.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.887 --rc genhtml_branch_coverage=1 00:32:23.887 --rc genhtml_function_coverage=1 00:32:23.887 --rc genhtml_legend=1 00:32:23.887 --rc geninfo_all_blocks=1 00:32:23.887 --rc geninfo_unexecuted_blocks=1 00:32:23.887 00:32:23.887 ' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.887 --rc genhtml_branch_coverage=1 00:32:23.887 --rc genhtml_function_coverage=1 00:32:23.887 --rc genhtml_legend=1 00:32:23.887 --rc geninfo_all_blocks=1 
00:32:23.887 --rc geninfo_unexecuted_blocks=1 00:32:23.887 00:32:23.887 ' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.887 --rc genhtml_branch_coverage=1 00:32:23.887 --rc genhtml_function_coverage=1 00:32:23.887 --rc genhtml_legend=1 00:32:23.887 --rc geninfo_all_blocks=1 00:32:23.887 --rc geninfo_unexecuted_blocks=1 00:32:23.887 00:32:23.887 ' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.887 --rc genhtml_branch_coverage=1 00:32:23.887 --rc genhtml_function_coverage=1 00:32:23.887 --rc genhtml_legend=1 00:32:23.887 --rc geninfo_all_blocks=1 00:32:23.887 --rc geninfo_unexecuted_blocks=1 00:32:23.887 00:32:23.887 ' 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.887 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:23.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=383822 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 383822 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 383822 ']' 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.888 19:31:08 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.888 19:31:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.888 [2024-12-06 19:31:08.927506] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:32:23.888 [2024-12-06 19:31:08.927594] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383822 ] 00:32:24.147 [2024-12-06 19:31:08.994847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:24.147 [2024-12-06 19:31:09.053198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.147 [2024-12-06 19:31:09.053202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:24.147 
19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:24.147 19:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:24.147 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:24.147 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:24.147 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:24.147 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:24.147 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:24.147 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:24.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.147 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.147 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:24.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:24.147 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:24.147 ' 00:32:27.445 [2024-12-06 19:31:11.869989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.395 [2024-12-06 19:31:13.138381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:30.931 [2024-12-06 19:31:15.481698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:32:32.836 [2024-12-06 19:31:17.519870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:34.214 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:34.214 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:34.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:34.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:34.214 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.214 
19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:34.214 19:31:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.783 19:31:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:34.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:34.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:34.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:34.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:34.783 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:34.783 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:34.783 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:34.783 ' 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:40.066 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:40.066 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:40.066 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:40.066 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 383822 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 383822 ']' 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 383822 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383822 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383822' 00:32:40.326 killing process with pid 383822 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 383822 00:32:40.326 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 383822 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 383822 ']' 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 383822 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 383822 ']' 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 383822 00:32:40.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (383822) - No such process 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 383822 is not found' 00:32:40.584 Process with pid 383822 is not found 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:40.584 00:32:40.584 real 0m16.714s 00:32:40.584 user 0m35.630s 00:32:40.584 sys 0m0.865s 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.584 19:31:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.584 ************************************ 00:32:40.584 END TEST spdkcli_nvmf_tcp 00:32:40.584 ************************************ 00:32:40.584 19:31:25 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.584 19:31:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:40.584 19:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.584 19:31:25 -- common/autotest_common.sh@10 
-- # set +x 00:32:40.584 ************************************ 00:32:40.584 START TEST nvmf_identify_passthru 00:32:40.584 ************************************ 00:32:40.584 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.584 * Looking for test storage... 00:32:40.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.584 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:40.584 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:40.584 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:40.842 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:40.842 19:31:25 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:40.842 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.842 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:40.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.842 --rc genhtml_branch_coverage=1 00:32:40.842 --rc genhtml_function_coverage=1 00:32:40.842 --rc genhtml_legend=1 00:32:40.842 --rc geninfo_all_blocks=1 00:32:40.842 --rc geninfo_unexecuted_blocks=1 00:32:40.842 00:32:40.842 ' 00:32:40.842 
19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:40.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.842 --rc genhtml_branch_coverage=1 00:32:40.842 --rc genhtml_function_coverage=1 00:32:40.842 --rc genhtml_legend=1 00:32:40.842 --rc geninfo_all_blocks=1 00:32:40.842 --rc geninfo_unexecuted_blocks=1 00:32:40.842 00:32:40.842 ' 00:32:40.842 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:40.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.842 --rc genhtml_branch_coverage=1 00:32:40.842 --rc genhtml_function_coverage=1 00:32:40.842 --rc genhtml_legend=1 00:32:40.842 --rc geninfo_all_blocks=1 00:32:40.842 --rc geninfo_unexecuted_blocks=1 00:32:40.842 00:32:40.842 ' 00:32:40.842 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:40.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.842 --rc genhtml_branch_coverage=1 00:32:40.842 --rc genhtml_function_coverage=1 00:32:40.842 --rc genhtml_legend=1 00:32:40.842 --rc geninfo_all_blocks=1 00:32:40.842 --rc geninfo_unexecuted_blocks=1 00:32:40.842 00:32:40.842 ' 00:32:40.842 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.842 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.842 19:31:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.843 19:31:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:40.843 19:31:25 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:40.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.843 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.843 19:31:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.843 19:31:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.843 19:31:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.843 19:31:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.843 19:31:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.843 19:31:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.843 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:40.843 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.843 19:31:25 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.843 19:31:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.750 
19:31:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:42.750 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:42.750 Found 0000:84:00.1 
(0x8086 - 0x159b) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.750 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:42.751 Found net devices under 0000:84:00.0: cvl_0_0 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.751 19:31:27 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:42.751 Found net devices under 0000:84:00.1: cvl_0_1 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.751 
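The device discovery above globs each PCI device's `net/` directory in sysfs and strips the path prefix to get the interface names (here `cvl_0_0` and `cvl_0_1`). A minimal sketch of that glob-and-strip step, run against a mock sysfs tree so it works unprivileged (the directory layout is a stand-in for the real `/sys/bus/pci/devices`):

```shell
# Sketch of the pci_net_devs discovery logged at nvmf/common.sh@411-428.
# Uses a throwaway mock tree instead of the real sysfs; names match the log.
set -euo pipefail

sysfs=$(mktemp -d)
pci="0000:84:00.0"
mkdir -p "$sysfs/devices/$pci/net/cvl_0_0"

# Glob the net/ subdirectory of the PCI device, as common.sh does.
pci_net_devs=("$sysfs/devices/$pci/net/"*)

# Strip everything up to the last '/' so only the interface name remains.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```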
19:31:27 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.751 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:43.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:32:43.011 00:32:43.011 --- 10.0.0.2 ping statistics --- 00:32:43.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.011 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:32:43.011 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:32:43.011 00:32:43.011 --- 10.0.0.1 ping statistics --- 00:32:43.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.011 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.012 19:31:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:43.012 
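The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over a real TCP path, then verifies reachability with `ping` in both directions. A dry-run sketch of the command order the log records; the `run` wrapper only prints, since the real commands need root and the `cvl_0_*` interfaces from this rig:

```shell
# Dry-run sketch of the netns setup logged at nvmf/common.sh@250-291.
# run() only echoes each command, so this executes unprivileged.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, ns side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # reachability check
```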
19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:32:43.012 19:31:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:82:00.0 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:43.012 19:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:47.210 19:31:32 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:32:47.210 19:31:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:32:47.210 19:31:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:47.210 19:31:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:51.396 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:51.396 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:51.396 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.396 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.654 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.654 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=388471 00:32:51.654 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:51.654 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.654 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 388471 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 388471 ']' 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
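The serial and model numbers above are scraped out of `spdk_nvme_identify` output with a `grep` for the label and `awk '{print $3}'` for the value. A sketch of that extraction against a sample identify line (the sample string is a stand-in for the tool's real output on this rig):

```shell
# Sketch of identify_passthru.sh@23-24: take field 3 of the matching line.
# 'sample' stands in for spdk_nvme_identify output; the serial is from the log.
sample='Serial Number:                         BTLJ9142051K1P0FGN'
nvme_serial_number=$(printf '%s\n' "$sample" | grep 'Serial Number:' | awk '{print $3}')
echo "$nvme_serial_number"   # → BTLJ9142051K1P0FGN
```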
00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.654 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.654 [2024-12-06 19:31:36.503499] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:32:51.654 [2024-12-06 19:31:36.503598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.654 [2024-12-06 19:31:36.579571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:51.654 [2024-12-06 19:31:36.637457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.654 [2024-12-06 19:31:36.637529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.654 [2024-12-06 19:31:36.637551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.654 [2024-12-06 19:31:36.637562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.654 [2024-12-06 19:31:36.637571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:51.654 [2024-12-06 19:31:36.639163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.654 [2024-12-06 19:31:36.639222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.654 [2024-12-06 19:31:36.639289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.654 [2024-12-06 19:31:36.639293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.912 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.912 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:51.912 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:51.912 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 INFO: Log level set to 20 00:32:51.913 INFO: Requests: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "method": "nvmf_set_config", 00:32:51.913 "id": 1, 00:32:51.913 "params": { 00:32:51.913 "admin_cmd_passthru": { 00:32:51.913 "identify_ctrlr": true 00:32:51.913 } 00:32:51.913 } 00:32:51.913 } 00:32:51.913 00:32:51.913 INFO: response: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "id": 1, 00:32:51.913 "result": true 00:32:51.913 } 00:32:51.913 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.913 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 INFO: Setting log level to 20 00:32:51.913 INFO: Setting log level to 20 00:32:51.913 INFO: Log level set to 20 00:32:51.913 INFO: Log level set to 20 00:32:51.913 
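The `rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr` call above is a JSON-RPC request over the target's UNIX socket, and the log prints both the request and the `"result": true` response. A sketch that rebuilds the same request body from the log and sanity-checks its shape; actual delivery to `/var/tmp/spdk.sock` is omitted:

```shell
# Sketch of the JSON-RPC request logged for nvmf_set_config
# (enables admin-command passthru of Identify Controller).
req='{
  "jsonrpc": "2.0",
  "method": "nvmf_set_config",
  "id": 1,
  "params": {
    "admin_cmd_passthru": {
      "identify_ctrlr": true
    }
  }
}'
# Minimal shape checks; a real client writes this to the app's RPC socket.
echo "$req" | grep -q '"method": "nvmf_set_config"' && echo request-ok
```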
INFO: Requests: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "method": "framework_start_init", 00:32:51.913 "id": 1 00:32:51.913 } 00:32:51.913 00:32:51.913 INFO: Requests: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "method": "framework_start_init", 00:32:51.913 "id": 1 00:32:51.913 } 00:32:51.913 00:32:51.913 [2024-12-06 19:31:36.843465] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:51.913 INFO: response: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "id": 1, 00:32:51.913 "result": true 00:32:51.913 } 00:32:51.913 00:32:51.913 INFO: response: 00:32:51.913 { 00:32:51.913 "jsonrpc": "2.0", 00:32:51.913 "id": 1, 00:32:51.913 "result": true 00:32:51.913 } 00:32:51.913 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.913 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 INFO: Setting log level to 40 00:32:51.913 INFO: Setting log level to 40 00:32:51.913 INFO: Setting log level to 40 00:32:51.913 [2024-12-06 19:31:36.853410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.913 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 19:31:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:32:51.913 19:31:36 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.913 19:31:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.201 Nvme0n1 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.201 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.201 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.201 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.201 [2024-12-06 19:31:39.761379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.201 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:55.201 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.201 19:31:39 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.201 [ 00:32:55.201 { 00:32:55.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:55.201 "subtype": "Discovery", 00:32:55.201 "listen_addresses": [], 00:32:55.201 "allow_any_host": true, 00:32:55.201 "hosts": [] 00:32:55.201 }, 00:32:55.201 { 00:32:55.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:55.201 "subtype": "NVMe", 00:32:55.201 "listen_addresses": [ 00:32:55.201 { 00:32:55.201 "trtype": "TCP", 00:32:55.201 "adrfam": "IPv4", 00:32:55.201 "traddr": "10.0.0.2", 00:32:55.201 "trsvcid": "4420" 00:32:55.201 } 00:32:55.202 ], 00:32:55.202 "allow_any_host": true, 00:32:55.202 "hosts": [], 00:32:55.202 "serial_number": "SPDK00000000000001", 00:32:55.202 "model_number": "SPDK bdev Controller", 00:32:55.202 "max_namespaces": 1, 00:32:55.202 "min_cntlid": 1, 00:32:55.202 "max_cntlid": 65519, 00:32:55.202 "namespaces": [ 00:32:55.202 { 00:32:55.202 "nsid": 1, 00:32:55.202 "bdev_name": "Nvme0n1", 00:32:55.202 "name": "Nvme0n1", 00:32:55.202 "nguid": "842FB1E5493045FF9503EC39275A5F0D", 00:32:55.202 "uuid": "842fb1e5-4930-45ff-9503-ec39275a5f0d" 00:32:55.202 } 00:32:55.202 ] 00:32:55.202 } 00:32:55.202 ] 00:32:55.202 19:31:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:55.202 19:31:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:55.202 19:31:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.202 rmmod nvme_tcp 00:32:55.202 rmmod nvme_fabrics 00:32:55.202 rmmod nvme_keyring 00:32:55.202 19:31:40 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 388471 ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 388471 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 388471 ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 388471 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388471 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388471' 00:32:55.202 killing process with pid 388471 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 388471 00:32:55.202 19:31:40 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 388471 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.220 19:31:41 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.220 19:31:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:57.220 19:31:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.131 19:31:43 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.131 00:32:59.131 real 0m18.366s 00:32:59.131 user 0m26.494s 00:32:59.131 sys 0m3.235s 00:32:59.131 19:31:43 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.131 19:31:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:59.131 ************************************ 00:32:59.131 END TEST nvmf_identify_passthru 00:32:59.131 ************************************ 00:32:59.131 19:31:43 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:59.131 19:31:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:59.131 19:31:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.131 19:31:43 -- common/autotest_common.sh@10 -- # set +x 00:32:59.131 ************************************ 00:32:59.131 START TEST nvmf_dif 00:32:59.131 ************************************ 00:32:59.131 19:31:43 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:59.131 * Looking for test storage... 
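The `nvmftestfini` path above unloads the nvme-tcp modules, kills the target (pid 388471), strips only the SPDK-tagged iptables rule by round-tripping through `iptables-save | grep -v SPDK_NVMF | iptables-restore`, removes the namespace via `remove_spdk_ns`, and flushes the leftover address. A dry-run sketch of that order; `run` only prints, and `ip netns delete` is an assumption about what `remove_spdk_ns` does, since its body is not shown in the log:

```shell
# Dry-run sketch of the teardown logged at nvmf/common.sh@516-524 and @297-303.
run() { echo "+ $*"; }

nvmfpid=388471
run modprobe -v -r nvme-tcp
run modprobe -v -r nvme-fabrics
run kill "$nvmfpid"
# Drop only rules tagged with the SPDK_NVMF comment; keep everything else.
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk   # assumed body of remove_spdk_ns
run ip -4 addr flush cvl_0_1
```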
00:32:59.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.131 19:31:43 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.131 19:31:43 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.131 19:31:43 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.131 19:31:44 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.131 19:31:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:59.131 19:31:44 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.131 19:31:44 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.131 --rc genhtml_branch_coverage=1 00:32:59.131 --rc genhtml_function_coverage=1 00:32:59.131 --rc genhtml_legend=1 00:32:59.131 --rc geninfo_all_blocks=1 00:32:59.131 --rc geninfo_unexecuted_blocks=1 00:32:59.131 00:32:59.131 ' 00:32:59.131 19:31:44 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.131 --rc genhtml_branch_coverage=1 00:32:59.131 --rc genhtml_function_coverage=1 00:32:59.131 --rc genhtml_legend=1 00:32:59.131 --rc geninfo_all_blocks=1 00:32:59.131 --rc geninfo_unexecuted_blocks=1 00:32:59.131 00:32:59.131 ' 00:32:59.131 19:31:44 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:32:59.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.131 --rc genhtml_branch_coverage=1 00:32:59.131 --rc genhtml_function_coverage=1 00:32:59.131 --rc genhtml_legend=1 00:32:59.131 --rc geninfo_all_blocks=1 00:32:59.131 --rc geninfo_unexecuted_blocks=1 00:32:59.131 00:32:59.132 ' 00:32:59.132 19:31:44 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.132 --rc genhtml_branch_coverage=1 00:32:59.132 --rc genhtml_function_coverage=1 00:32:59.132 --rc genhtml_legend=1 00:32:59.132 --rc geninfo_all_blocks=1 00:32:59.132 --rc geninfo_unexecuted_blocks=1 00:32:59.132 00:32:59.132 ' 00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:59.132 19:31:44 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.132 19:31:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.132 19:31:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.132 19:31:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.132 19:31:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.132 19:31:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.132 19:31:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.132 19:31:44 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.132 19:31:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:59.132 19:31:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
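The `[: : integer expression expected` message above comes from `common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: an empty string is not a valid integer operand. A defaulted parameter expansion avoids the error; the sketch below is illustrative only, and `flag` is a stand-in name, not SPDK's actual variable:

```shell
# An empty value makes `[ "" -eq 1 ]` fail with "integer expression expected",
# exactly as in the common.sh:33 message above. Defaulting the expansion to 0
# keeps the test well-formed. `flag` is a hypothetical stand-in variable.
flag=""
if [ "${flag:-0}" -eq 1 ]; then
  mode=enabled
else
  mode=disabled
fi
echo "$mode"    # prints "disabled" since flag is empty and defaults to 0
```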
00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:59.132 19:31:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.132 19:31:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.132 19:31:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.132 19:31:44 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.132 19:31:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:01.667 19:31:46 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:01.667 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:01.667 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.667 19:31:46 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:01.667 Found net devices under 0000:84:00.0: cvl_0_0 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:01.667 Found net devices under 0000:84:00.1: cvl_0_1 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.667 
19:31:46 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.667 19:31:46 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:01.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:33:01.668 00:33:01.668 --- 10.0.0.2 ping statistics --- 00:33:01.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.668 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:01.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:33:01.668 00:33:01.668 --- 10.0.0.1 ping statistics --- 00:33:01.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.668 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:01.668 19:31:46 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:02.603 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:02.603 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:02.603 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:02.603 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:02.603 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:02.603 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:02.603 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:02.603 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:02.603 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:02.603 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:02.603 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:02.603 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:02.603 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:33:02.603 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:02.603 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:02.603 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:02.603 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:02.603 19:31:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:02.603 19:31:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=391680 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:02.603 19:31:47 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 391680 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 391680 ']' 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
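The namespace plumbing performed around 00:33:01 above (`nvmf_tcp_init`) can be condensed into a short sketch. Interface names, the namespace name, and the addresses are taken from the log; the `run` wrapper is an illustrative addition that records each command and only executes it when `AS_ROOT=1`, since the real steps require root and the physical `cvl_0_*` ports:

```shell
# Record each command; execute only when explicitly allowed (needs root + NICs).
CMDS=""
run() {
  CMDS="${CMDS}$*;"
  if [ "${AS_ROOT:-0}" = 1 ]; then "$@"; fi
}

NS=cvl_0_0_ns_spdk                          # namespace name from the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"         # target-side port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side stays in the default netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                      # initiator -> target reachability check
```

The harness then launches `nvmf_tgt` via `ip netns exec cvl_0_0_ns_spdk`, which is why the target later listens on 10.0.0.2 while fio connects from the default namespace.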
00:33:02.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:02.603 19:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:02.862 [2024-12-06 19:31:47.686077] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:33:02.862 [2024-12-06 19:31:47.686155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.862 [2024-12-06 19:31:47.758475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.862 [2024-12-06 19:31:47.817330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.862 [2024-12-06 19:31:47.817389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.862 [2024-12-06 19:31:47.817403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.862 [2024-12-06 19:31:47.817414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.862 [2024-12-06 19:31:47.817424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
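`waitforlisten` above blocks until the target's RPC socket appears. The loop below is an illustrative stand-in, not SPDK's actual helper; the socket path `/var/tmp/spdk.sock` and the `max_retries=100` default mirror the log, the polling body is an assumption:

```shell
# Illustrative stand-in for waitforlisten: poll until the RPC unix socket
# exists or the retries run out. Path and retry count mirror the log; the
# loop itself is an assumption, not SPDK's implementation.
wait_for_sock() {
  sock=$1
  max_retries=${2:-100}
  i=0
  while [ "$i" -lt "$max_retries" ] && [ ! -S "$sock" ]; do
    sleep 0.1
    i=$((i + 1))
  done
  [ -S "$sock" ]    # status 0 once the socket file exists
}
# harness equivalent: wait_for_sock /var/tmp/spdk.sock
```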
00:33:02.862 [2024-12-06 19:31:47.818099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:03.121 19:31:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:03.121 19:31:47 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.121 19:31:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:03.121 19:31:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.121 19:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 [2024-12-06 19:31:47.992297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.122 19:31:47 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 19:31:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:03.122 19:31:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:03.122 19:31:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.122 19:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 ************************************ 00:33:03.122 START TEST fio_dif_1_default 00:33:03.122 ************************************ 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 bdev_null0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 [2024-12-06 19:31:48.048584] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:03.122 { 00:33:03.122 "params": { 00:33:03.122 "name": "Nvme$subsystem", 00:33:03.122 "trtype": "$TEST_TRANSPORT", 00:33:03.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.122 "adrfam": "ipv4", 00:33:03.122 "trsvcid": "$NVMF_PORT", 00:33:03.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.122 "hdgst": ${hdgst:-false}, 00:33:03.122 "ddgst": ${ddgst:-false} 00:33:03.122 }, 00:33:03.122 "method": "bdev_nvme_attach_controller" 00:33:03.122 } 00:33:03.122 EOF 00:33:03.122 )") 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
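`gen_nvmf_target_json` above fills one heredoc fragment per subsystem index and later merges the fragments with `jq`. A trimmed sketch of the template step, with values matching the rendered config printed further on in the log (the `jq` merge is elided here):

```shell
# Fill the per-subsystem template the way gen_nvmf_target_json does: one
# fragment per subsystem index, variables expanded by the unquoted heredoc.
# The jq merge into the final {"subsystems": [...]} document is elided.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```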
00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
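The generated JSON is handed to fio over `/dev/fd/62` alongside a job file on `/dev/fd/61`. A representative job fragment matching the parameters visible in the fio banner in this log (rw=randread, bs=4k, iodepth=4, ioengine=spdk_bdev); the `filename` and `thread` settings are assumptions, since the harness generates the real file:

```ini
; Representative fio job for this run; parameters mirror the fio banner
; (rw=randread, bs=4k, iodepth=4, ioengine=spdk_bdev). filename and thread
; are assumptions -- the harness generates the actual job file.
[global]
ioengine=spdk_bdev
thread=1
direct=1

[filename0]
rw=randread
bs=4k
iodepth=4
filename=Nvme0n1
```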
00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:03.122 "params": { 00:33:03.122 "name": "Nvme0", 00:33:03.122 "trtype": "tcp", 00:33:03.122 "traddr": "10.0.0.2", 00:33:03.122 "adrfam": "ipv4", 00:33:03.122 "trsvcid": "4420", 00:33:03.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:03.122 "hdgst": false, 00:33:03.122 "ddgst": false 00:33:03.122 }, 00:33:03.122 "method": "bdev_nvme_attach_controller" 00:33:03.122 }' 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:03.122 19:31:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:03.382 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:03.382 fio-3.35 
00:33:03.382 Starting 1 thread 00:33:15.584 00:33:15.584 filename0: (groupid=0, jobs=1): err= 0: pid=391933: Fri Dec 6 19:31:58 2024 00:33:15.584 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10027msec) 00:33:15.584 slat (nsec): min=7441, max=50208, avg=9370.88, stdev=3208.68 00:33:15.585 clat (usec): min=712, max=43742, avg=41403.64, stdev=2670.90 00:33:15.585 lat (usec): min=721, max=43781, avg=41413.01, stdev=2670.90 00:33:15.585 clat percentiles (usec): 00:33:15.585 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:15.585 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:33:15.585 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:15.585 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:15.585 | 99.99th=[43779] 00:33:15.585 bw ( KiB/s): min= 352, max= 416, per=99.70%, avg=385.60, stdev=12.61, samples=20 00:33:15.585 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:33:15.585 lat (usec) : 750=0.41% 00:33:15.585 lat (msec) : 50=99.59% 00:33:15.585 cpu : usr=90.60%, sys=9.11%, ctx=17, majf=0, minf=9 00:33:15.585 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:15.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.585 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.585 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:15.585 00:33:15.585 Run status group 0 (all jobs): 00:33:15.585 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3872KiB (3965kB), run=10027-10027msec 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:15.585 19:31:59 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 00:33:15.585 real 0m11.144s 00:33:15.585 user 0m10.325s 00:33:15.585 sys 0m1.173s 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 ************************************ 00:33:15.585 END TEST fio_dif_1_default 00:33:15.585 ************************************ 00:33:15.585 19:31:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:15.585 19:31:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:15.585 19:31:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 ************************************ 00:33:15.585 START TEST fio_dif_1_multi_subsystems 00:33:15.585 ************************************ 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 bdev_null0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 [2024-12-06 19:31:59.235392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 bdev_null1 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.585 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.586 19:31:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.586 { 00:33:15.586 "params": { 00:33:15.586 
"name": "Nvme$subsystem", 00:33:15.586 "trtype": "$TEST_TRANSPORT", 00:33:15.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.586 "adrfam": "ipv4", 00:33:15.586 "trsvcid": "$NVMF_PORT", 00:33:15.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.586 "hdgst": ${hdgst:-false}, 00:33:15.586 "ddgst": ${ddgst:-false} 00:33:15.586 }, 00:33:15.586 "method": "bdev_nvme_attach_controller" 00:33:15.586 } 00:33:15.586 EOF 00:33:15.586 )") 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.586 { 00:33:15.586 "params": { 00:33:15.586 "name": "Nvme$subsystem", 00:33:15.586 "trtype": "$TEST_TRANSPORT", 00:33:15.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.586 "adrfam": "ipv4", 00:33:15.586 "trsvcid": "$NVMF_PORT", 00:33:15.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.586 "hdgst": ${hdgst:-false}, 00:33:15.586 "ddgst": ${ddgst:-false} 00:33:15.586 }, 00:33:15.586 "method": "bdev_nvme_attach_controller" 00:33:15.586 } 00:33:15.586 EOF 00:33:15.586 )") 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:15.586 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.586 "params": { 00:33:15.586 "name": "Nvme0", 00:33:15.586 "trtype": "tcp", 00:33:15.586 "traddr": "10.0.0.2", 00:33:15.586 "adrfam": "ipv4", 00:33:15.586 "trsvcid": "4420", 00:33:15.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.586 "hdgst": false, 00:33:15.586 "ddgst": false 00:33:15.586 }, 00:33:15.586 "method": "bdev_nvme_attach_controller" 00:33:15.586 },{ 00:33:15.586 "params": { 00:33:15.586 "name": "Nvme1", 00:33:15.586 "trtype": "tcp", 00:33:15.586 "traddr": "10.0.0.2", 00:33:15.586 "adrfam": "ipv4", 00:33:15.586 "trsvcid": "4420", 00:33:15.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.586 "hdgst": false, 00:33:15.586 "ddgst": false 00:33:15.586 }, 00:33:15.586 "method": "bdev_nvme_attach_controller" 00:33:15.586 }' 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:15.587 19:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:15.587 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:15.587 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:15.587 fio-3.35 00:33:15.587 Starting 2 threads 00:33:25.565 00:33:25.565 filename0: (groupid=0, jobs=1): err= 0: pid=393393: Fri Dec 6 19:32:10 2024 00:33:25.565 read: IOPS=203, BW=812KiB/s (832kB/s)(8144KiB/10026msec) 00:33:25.565 slat (nsec): min=6262, max=41556, avg=9211.54, stdev=3426.71 00:33:25.565 clat (usec): min=488, max=45860, avg=19669.00, stdev=20320.65 00:33:25.565 lat (usec): min=495, max=45891, avg=19678.21, stdev=20320.58 00:33:25.565 clat percentiles (usec): 00:33:25.565 | 1.00th=[ 515], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 586], 00:33:25.565 | 30.00th=[ 611], 40.00th=[ 668], 50.00th=[ 857], 60.00th=[41157], 00:33:25.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:25.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:33:25.565 | 99.99th=[45876] 00:33:25.565 bw ( KiB/s): min= 640, max= 1536, per=49.81%, avg=812.80, stdev=193.79, samples=20 00:33:25.565 iops : min= 160, max= 384, avg=203.20, stdev=48.45, samples=20 00:33:25.565 lat (usec) : 500=0.20%, 750=47.25%, 1000=5.50% 00:33:25.565 lat (msec) : 2=0.29%, 50=46.76% 00:33:25.565 cpu : usr=94.89%, sys=4.83%, ctx=16, majf=0, minf=43 00:33:25.565 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:25.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.565 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:25.565 filename1: (groupid=0, jobs=1): err= 0: pid=393394: Fri Dec 6 19:32:10 2024 00:33:25.565 read: IOPS=204, BW=819KiB/s (839kB/s)(8224KiB/10040msec) 00:33:25.565 slat (nsec): min=6963, max=46698, avg=9436.55, stdev=3807.28 00:33:25.565 clat (usec): min=513, max=45905, avg=19503.71, stdev=20326.97 00:33:25.565 lat (usec): min=520, max=45936, avg=19513.15, stdev=20326.76 00:33:25.565 clat percentiles (usec): 00:33:25.565 | 1.00th=[ 523], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 586], 00:33:25.565 | 30.00th=[ 611], 40.00th=[ 668], 50.00th=[ 865], 60.00th=[41157], 00:33:25.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:25.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:33:25.565 | 99.99th=[45876] 00:33:25.565 bw ( KiB/s): min= 608, max= 1088, per=50.30%, avg=820.80, stdev=109.03, samples=20 00:33:25.565 iops : min= 152, max= 272, avg=205.20, stdev=27.26, samples=20 00:33:25.565 lat (usec) : 750=47.37%, 1000=5.89% 00:33:25.565 lat (msec) : 2=0.44%, 50=46.30% 00:33:25.565 cpu : usr=94.79%, sys=4.92%, ctx=19, majf=0, minf=59 00:33:25.565 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:25.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.565 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.565 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:25.565 00:33:25.565 Run status group 0 (all jobs): 00:33:25.565 READ: bw=1630KiB/s (1669kB/s), 812KiB/s-819KiB/s (832kB/s-839kB/s), io=16.0MiB (16.8MB), run=10026-10040msec 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:25.565 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 00:33:25.566 real 0m11.280s 00:33:25.566 user 0m20.266s 00:33:25.566 sys 0m1.260s 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 ************************************ 00:33:25.566 END TEST fio_dif_1_multi_subsystems 00:33:25.566 ************************************ 00:33:25.566 19:32:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:25.566 19:32:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:25.566 19:32:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 ************************************ 00:33:25.566 START TEST fio_dif_rand_params 00:33:25.566 ************************************ 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 bdev_null0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:25.566 [2024-12-06 19:32:10.570225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.566 { 00:33:25.566 "params": { 00:33:25.566 "name": "Nvme$subsystem", 00:33:25.566 "trtype": "$TEST_TRANSPORT", 00:33:25.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.566 "adrfam": "ipv4", 00:33:25.566 "trsvcid": "$NVMF_PORT", 00:33:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.566 "hdgst": ${hdgst:-false}, 00:33:25.566 "ddgst": 
${ddgst:-false} 00:33:25.566 }, 00:33:25.566 "method": "bdev_nvme_attach_controller" 00:33:25.566 } 00:33:25.566 EOF 00:33:25.566 )") 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.566 "params": { 00:33:25.566 "name": "Nvme0", 00:33:25.566 "trtype": "tcp", 00:33:25.566 "traddr": "10.0.0.2", 00:33:25.566 "adrfam": "ipv4", 00:33:25.566 "trsvcid": "4420", 00:33:25.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.566 "hdgst": false, 00:33:25.566 "ddgst": false 00:33:25.566 }, 00:33:25.566 "method": "bdev_nvme_attach_controller" 00:33:25.566 }' 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:25.566 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:25.567 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:25.567 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:25.567 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:25.826 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:25.826 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:25.826 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:25.826 19:32:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:25.826 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:25.826 ... 00:33:25.826 fio-3.35 00:33:25.826 Starting 3 threads 00:33:32.387 00:33:32.387 filename0: (groupid=0, jobs=1): err= 0: pid=394775: Fri Dec 6 19:32:16 2024 00:33:32.387 read: IOPS=252, BW=31.5MiB/s (33.0MB/s)(158MiB/5005msec) 00:33:32.387 slat (nsec): min=3897, max=41362, avg=18113.53, stdev=4493.42 00:33:32.387 clat (usec): min=4675, max=51864, avg=11872.22, stdev=4368.92 00:33:32.387 lat (usec): min=4686, max=51882, avg=11890.33, stdev=4368.61 00:33:32.387 clat percentiles (usec): 00:33:32.387 | 1.00th=[ 5145], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10028], 00:33:32.387 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:33:32.387 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14091], 95.00th=[15008], 00:33:32.387 | 99.00th=[45351], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:33:32.387 | 99.99th=[51643] 00:33:32.387 bw ( KiB/s): min=25344, max=35584, per=35.28%, avg=32256.00, stdev=2775.63, samples=10 00:33:32.387 iops : min= 198, max= 278, avg=252.00, stdev=21.68, samples=10 00:33:32.387 lat (msec) : 10=18.46%, 20=80.35%, 50=0.95%, 100=0.24% 00:33:32.387 cpu : usr=94.76%, sys=4.74%, ctx=15, majf=0, minf=45 00:33:32.387 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 issued rwts: total=1262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:32.387 filename0: (groupid=0, jobs=1): err= 0: pid=394776: Fri Dec 6 19:32:16 2024 00:33:32.387 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(149MiB/5044msec) 00:33:32.387 slat (nsec): min=4940, max=45586, 
avg=17348.71, stdev=4297.71 00:33:32.387 clat (usec): min=4507, max=52274, avg=12626.54, stdev=5437.83 00:33:32.387 lat (usec): min=4515, max=52289, avg=12643.89, stdev=5437.83 00:33:32.387 clat percentiles (usec): 00:33:32.387 | 1.00th=[ 4686], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:33:32.387 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:33:32.387 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14353], 95.00th=[15139], 00:33:32.387 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:33:32.387 | 99.99th=[52167] 00:33:32.387 bw ( KiB/s): min=24832, max=33536, per=33.35%, avg=30489.60, stdev=2547.02, samples=10 00:33:32.387 iops : min= 194, max= 262, avg=238.20, stdev=19.90, samples=10 00:33:32.387 lat (msec) : 10=10.31%, 20=87.76%, 50=0.84%, 100=1.09% 00:33:32.387 cpu : usr=96.41%, sys=3.07%, ctx=10, majf=0, minf=66 00:33:32.387 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:32.387 filename0: (groupid=0, jobs=1): err= 0: pid=394777: Fri Dec 6 19:32:16 2024 00:33:32.387 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5003msec) 00:33:32.387 slat (nsec): min=4340, max=62906, avg=17602.92, stdev=3906.01 00:33:32.387 clat (usec): min=4413, max=89543, avg=13051.23, stdev=5750.70 00:33:32.387 lat (usec): min=4426, max=89557, avg=13068.84, stdev=5750.44 00:33:32.387 clat percentiles (usec): 00:33:32.387 | 1.00th=[ 5080], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11207], 00:33:32.387 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:33:32.387 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15270], 95.00th=[15795], 00:33:32.387 | 99.00th=[18482], 
99.50th=[53740], 99.90th=[88605], 99.95th=[89654], 00:33:32.387 | 99.99th=[89654] 00:33:32.387 bw ( KiB/s): min=24320, max=31232, per=32.09%, avg=29337.60, stdev=2128.89, samples=10 00:33:32.387 iops : min= 190, max= 244, avg=229.20, stdev=16.63, samples=10 00:33:32.387 lat (msec) : 10=8.54%, 20=90.51%, 50=0.44%, 100=0.52% 00:33:32.387 cpu : usr=95.52%, sys=3.98%, ctx=12, majf=0, minf=76 00:33:32.387 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:32.387 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:32.387 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:32.387 00:33:32.387 Run status group 0 (all jobs): 00:33:32.387 READ: bw=89.3MiB/s (93.6MB/s), 28.7MiB/s-31.5MiB/s (30.1MB/s-33.0MB/s), io=450MiB (472MB), run=5003-5044msec 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.387 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:32.388 
19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 bdev_null0 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 [2024-12-06 19:32:16.879757] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 bdev_null1 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 bdev_null2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:32.388 19:32:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:32.388 { 00:33:32.388 "params": { 00:33:32.388 "name": "Nvme$subsystem", 00:33:32.388 "trtype": "$TEST_TRANSPORT", 00:33:32.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.388 "adrfam": "ipv4", 00:33:32.388 "trsvcid": "$NVMF_PORT", 00:33:32.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.388 "hdgst": ${hdgst:-false}, 00:33:32.388 "ddgst": ${ddgst:-false} 00:33:32.388 }, 00:33:32.388 "method": "bdev_nvme_attach_controller" 00:33:32.388 } 00:33:32.388 EOF 00:33:32.388 )") 00:33:32.388 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # 
cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:32.389 { 00:33:32.389 "params": { 00:33:32.389 "name": "Nvme$subsystem", 00:33:32.389 "trtype": "$TEST_TRANSPORT", 00:33:32.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.389 "adrfam": "ipv4", 00:33:32.389 "trsvcid": "$NVMF_PORT", 00:33:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.389 "hdgst": ${hdgst:-false}, 00:33:32.389 "ddgst": ${ddgst:-false} 00:33:32.389 }, 00:33:32.389 "method": "bdev_nvme_attach_controller" 00:33:32.389 } 00:33:32.389 EOF 00:33:32.389 )") 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:32.389 { 00:33:32.389 "params": { 00:33:32.389 "name": "Nvme$subsystem", 00:33:32.389 "trtype": "$TEST_TRANSPORT", 00:33:32.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.389 "adrfam": "ipv4", 00:33:32.389 "trsvcid": "$NVMF_PORT", 00:33:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.389 "hdgst": ${hdgst:-false}, 00:33:32.389 "ddgst": ${ddgst:-false} 00:33:32.389 }, 00:33:32.389 "method": "bdev_nvme_attach_controller" 00:33:32.389 } 00:33:32.389 EOF 00:33:32.389 )") 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:32.389 "params": { 00:33:32.389 "name": "Nvme0", 00:33:32.389 "trtype": "tcp", 00:33:32.389 "traddr": "10.0.0.2", 00:33:32.389 "adrfam": "ipv4", 00:33:32.389 "trsvcid": "4420", 00:33:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.389 "hdgst": false, 00:33:32.389 "ddgst": false 00:33:32.389 }, 00:33:32.389 "method": "bdev_nvme_attach_controller" 00:33:32.389 },{ 00:33:32.389 "params": { 00:33:32.389 "name": "Nvme1", 00:33:32.389 "trtype": "tcp", 00:33:32.389 "traddr": "10.0.0.2", 00:33:32.389 "adrfam": "ipv4", 00:33:32.389 "trsvcid": "4420", 00:33:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:32.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:32.389 "hdgst": false, 00:33:32.389 "ddgst": false 00:33:32.389 }, 00:33:32.389 "method": "bdev_nvme_attach_controller" 00:33:32.389 },{ 00:33:32.389 "params": { 00:33:32.389 "name": "Nvme2", 00:33:32.389 "trtype": "tcp", 00:33:32.389 "traddr": "10.0.0.2", 00:33:32.389 "adrfam": "ipv4", 00:33:32.389 "trsvcid": "4420", 00:33:32.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:32.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:32.389 "hdgst": false, 00:33:32.389 "ddgst": false 00:33:32.389 }, 00:33:32.389 "method": "bdev_nvme_attach_controller" 00:33:32.389 }' 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:32.389 19:32:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:32.389 19:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:32.389 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:32.389 ... 00:33:32.389 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:32.389 ... 00:33:32.389 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:32.389 ... 
00:33:32.389 fio-3.35 00:33:32.389 Starting 24 threads 00:33:44.616 00:33:44.616 filename0: (groupid=0, jobs=1): err= 0: pid=395660: Fri Dec 6 19:32:28 2024 00:33:44.616 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10049msec) 00:33:44.616 slat (usec): min=8, max=103, avg=47.39, stdev=24.86 00:33:44.616 clat (msec): min=129, max=549, avg=278.73, stdev=85.00 00:33:44.616 lat (msec): min=129, max=549, avg=278.77, stdev=84.98 00:33:44.616 clat percentiles (msec): 00:33:44.616 | 1.00th=[ 131], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.616 | 30.00th=[ 203], 40.00th=[ 236], 50.00th=[ 275], 60.00th=[ 300], 00:33:44.616 | 70.00th=[ 321], 80.00th=[ 363], 90.00th=[ 393], 95.00th=[ 439], 00:33:44.616 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:33:44.616 | 99.99th=[ 550] 00:33:44.616 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=224.00, stdev=80.59, samples=20 00:33:44.616 iops : min= 32, max= 96, avg=56.00, stdev=20.15, samples=20 00:33:44.617 lat (msec) : 250=42.36%, 500=56.60%, 750=1.04% 00:33:44.617 cpu : usr=98.28%, sys=1.18%, ctx=193, majf=0, minf=25 00:33:44.617 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395661: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=66, BW=264KiB/s (270kB/s)(2672KiB/10119msec) 00:33:44.617 slat (usec): min=7, max=111, avg=43.29, stdev=28.84 00:33:44.617 clat (msec): min=100, max=403, avg=241.08, stdev=57.29 00:33:44.617 lat (msec): min=100, max=403, avg=241.12, stdev=57.29 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 101], 5.00th=[ 159], 10.00th=[ 190], 20.00th=[ 190], 00:33:44.617 | 
30.00th=[ 192], 40.00th=[ 220], 50.00th=[ 245], 60.00th=[ 268], 00:33:44.617 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 330], 00:33:44.617 | 99.00th=[ 376], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:44.617 | 99.99th=[ 405] 00:33:44.617 bw ( KiB/s): min= 128, max= 384, per=4.26%, avg=260.80, stdev=74.16, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=65.20, stdev=18.54, samples=20 00:33:44.617 lat (msec) : 250=51.50%, 500=48.50% 00:33:44.617 cpu : usr=98.31%, sys=1.26%, ctx=16, majf=0, minf=39 00:33:44.617 IO depths : 1=2.2%, 2=7.9%, 4=23.2%, 8=56.3%, 16=10.3%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395662: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10111msec) 00:33:44.617 slat (usec): min=6, max=105, avg=48.03, stdev=29.78 00:33:44.617 clat (msec): min=127, max=466, avg=250.03, stdev=54.93 00:33:44.617 lat (msec): min=127, max=466, avg=250.07, stdev=54.92 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 130], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.617 | 30.00th=[ 209], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:33:44.617 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 313], 95.00th=[ 334], 00:33:44.617 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 468], 99.95th=[ 468], 00:33:44.617 | 99.99th=[ 468] 00:33:44.617 bw ( KiB/s): min= 128, max= 384, per=4.10%, avg=250.40, stdev=74.37, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=62.60, stdev=18.59, samples=20 00:33:44.617 lat (msec) : 250=45.48%, 500=54.52% 00:33:44.617 cpu : usr=98.25%, sys=1.32%, ctx=14, majf=0, minf=26 00:33:44.617 IO depths : 1=2.3%, 2=6.7%, 
4=19.2%, 8=61.5%, 16=10.3%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395663: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=62, BW=252KiB/s (258kB/s)(2544KiB/10106msec) 00:33:44.617 slat (usec): min=5, max=119, avg=41.02, stdev=27.20 00:33:44.617 clat (msec): min=147, max=545, avg=253.41, stdev=59.91 00:33:44.617 lat (msec): min=147, max=545, avg=253.46, stdev=59.92 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 148], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.617 | 30.00th=[ 197], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 275], 00:33:44.617 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 330], 95.00th=[ 372], 00:33:44.617 | 99.00th=[ 388], 99.50th=[ 418], 99.90th=[ 542], 99.95th=[ 542], 00:33:44.617 | 99.99th=[ 542] 00:33:44.617 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=248.00, stdev=84.90, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=62.00, stdev=21.23, samples=20 00:33:44.617 lat (msec) : 250=48.74%, 500=50.94%, 750=0.31% 00:33:44.617 cpu : usr=98.15%, sys=1.38%, ctx=32, majf=0, minf=36 00:33:44.617 IO depths : 1=4.4%, 2=9.9%, 4=22.6%, 8=54.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395664: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=62, BW=250KiB/s (256kB/s)(2520KiB/10099msec) 00:33:44.617 slat (usec): min=8, max=110, 
avg=46.96, stdev=26.97 00:33:44.617 clat (msec): min=127, max=515, avg=255.19, stdev=59.33 00:33:44.617 lat (msec): min=127, max=515, avg=255.24, stdev=59.33 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 130], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.617 | 30.00th=[ 213], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 268], 00:33:44.617 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 355], 00:33:44.617 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 514], 99.95th=[ 514], 00:33:44.617 | 99.99th=[ 514] 00:33:44.617 bw ( KiB/s): min= 128, max= 384, per=4.07%, avg=248.00, stdev=60.20, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=62.00, stdev=15.05, samples=20 00:33:44.617 lat (msec) : 250=46.98%, 500=52.70%, 750=0.32% 00:33:44.617 cpu : usr=98.21%, sys=1.33%, ctx=18, majf=0, minf=37 00:33:44.617 IO depths : 1=2.4%, 2=7.0%, 4=20.0%, 8=60.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395665: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10119msec) 00:33:44.617 slat (usec): min=7, max=112, avg=63.72, stdev=21.23 00:33:44.617 clat (msec): min=125, max=517, avg=271.82, stdev=88.61 00:33:44.617 lat (msec): min=125, max=517, avg=271.89, stdev=88.62 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 127], 5.00th=[ 132], 10.00th=[ 190], 20.00th=[ 190], 00:33:44.617 | 30.00th=[ 192], 40.00th=[ 215], 50.00th=[ 262], 60.00th=[ 300], 00:33:44.617 | 70.00th=[ 313], 80.00th=[ 372], 90.00th=[ 393], 95.00th=[ 435], 00:33:44.617 | 99.00th=[ 439], 99.50th=[ 485], 99.90th=[ 518], 99.95th=[ 518], 00:33:44.617 | 99.99th=[ 518] 00:33:44.617 bw ( 
KiB/s): min= 128, max= 384, per=3.77%, avg=230.40, stdev=97.31, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=57.60, stdev=24.33, samples=20 00:33:44.617 lat (msec) : 250=48.99%, 500=50.68%, 750=0.34% 00:33:44.617 cpu : usr=98.14%, sys=1.40%, ctx=13, majf=0, minf=29 00:33:44.617 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395666: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=75, BW=304KiB/s (311kB/s)(3072KiB/10121msec) 00:33:44.617 slat (nsec): min=7955, max=90403, avg=24250.46, stdev=19750.24 00:33:44.617 clat (msec): min=123, max=286, avg=210.61, stdev=41.26 00:33:44.617 lat (msec): min=123, max=286, avg=210.64, stdev=41.25 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 163], 20.00th=[ 176], 00:33:44.617 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 203], 60.00th=[ 222], 00:33:44.617 | 70.00th=[ 234], 80.00th=[ 253], 90.00th=[ 275], 95.00th=[ 275], 00:33:44.617 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 288], 99.95th=[ 288], 00:33:44.617 | 99.99th=[ 288] 00:33:44.617 bw ( KiB/s): min= 144, max= 496, per=4.92%, avg=300.80, stdev=77.45, samples=20 00:33:44.617 iops : min= 36, max= 124, avg=75.20, stdev=19.36, samples=20 00:33:44.617 lat (msec) : 250=79.17%, 500=20.83% 00:33:44.617 cpu : usr=98.44%, sys=1.14%, ctx=21, majf=0, minf=26 00:33:44.617 IO depths : 1=1.2%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: 
total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename0: (groupid=0, jobs=1): err= 0: pid=395667: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=59, BW=240KiB/s (246kB/s)(2424KiB/10102msec) 00:33:44.617 slat (nsec): min=8196, max=83482, avg=24202.72, stdev=12731.61 00:33:44.617 clat (msec): min=155, max=491, avg=266.23, stdev=69.93 00:33:44.617 lat (msec): min=155, max=492, avg=266.25, stdev=69.92 00:33:44.617 clat percentiles (msec): 00:33:44.617 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.617 | 30.00th=[ 207], 40.00th=[ 232], 50.00th=[ 271], 60.00th=[ 279], 00:33:44.617 | 70.00th=[ 300], 80.00th=[ 330], 90.00th=[ 380], 95.00th=[ 393], 00:33:44.617 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 493], 99.95th=[ 493], 00:33:44.617 | 99.99th=[ 493] 00:33:44.617 bw ( KiB/s): min= 128, max= 384, per=3.87%, avg=236.00, stdev=75.02, samples=20 00:33:44.617 iops : min= 32, max= 96, avg=59.00, stdev=18.76, samples=20 00:33:44.617 lat (msec) : 250=44.88%, 500=55.12% 00:33:44.617 cpu : usr=98.53%, sys=1.03%, ctx=20, majf=0, minf=26 00:33:44.617 IO depths : 1=5.4%, 2=11.7%, 4=25.1%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:44.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.617 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.617 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.617 filename1: (groupid=0, jobs=1): err= 0: pid=395668: Fri Dec 6 19:32:28 2024 00:33:44.617 read: IOPS=64, BW=259KiB/s (266kB/s)(2624KiB/10113msec) 00:33:44.617 slat (nsec): min=6277, max=81816, avg=25672.14, stdev=12555.88 00:33:44.618 clat (msec): min=118, max=539, avg=245.58, stdev=56.50 00:33:44.618 lat (msec): min=118, max=539, avg=245.61, stdev=56.50 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 128], 5.00th=[ 
186], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 199], 40.00th=[ 224], 50.00th=[ 243], 60.00th=[ 264], 00:33:44.618 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 338], 95.00th=[ 347], 00:33:44.618 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 542], 99.95th=[ 542], 00:33:44.618 | 99.99th=[ 542] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=4.18%, avg=256.00, stdev=68.28, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=64.00, stdev=17.07, samples=20 00:33:44.618 lat (msec) : 250=54.57%, 500=45.12%, 750=0.30% 00:33:44.618 cpu : usr=98.27%, sys=1.26%, ctx=33, majf=0, minf=31 00:33:44.618 IO depths : 1=3.8%, 2=9.3%, 4=22.7%, 8=55.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395669: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=76, BW=307KiB/s (314kB/s)(3104KiB/10121msec) 00:33:44.618 slat (nsec): min=7872, max=76191, avg=21409.17, stdev=18415.76 00:33:44.618 clat (msec): min=115, max=412, avg=208.39, stdev=50.24 00:33:44.618 lat (msec): min=116, max=412, avg=208.41, stdev=50.24 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 129], 5.00th=[ 132], 10.00th=[ 148], 20.00th=[ 167], 00:33:44.618 | 30.00th=[ 190], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 203], 00:33:44.618 | 70.00th=[ 234], 80.00th=[ 255], 90.00th=[ 271], 95.00th=[ 279], 00:33:44.618 | 99.00th=[ 384], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:44.618 | 99.99th=[ 414] 00:33:44.618 bw ( KiB/s): min= 176, max= 432, per=4.98%, avg=304.00, stdev=69.45, samples=20 00:33:44.618 iops : min= 44, max= 108, avg=76.00, stdev=17.36, samples=20 00:33:44.618 lat (msec) : 250=76.55%, 500=23.45% 00:33:44.618 cpu : usr=97.92%, 
sys=1.56%, ctx=32, majf=0, minf=37 00:33:44.618 IO depths : 1=1.0%, 2=2.8%, 4=11.5%, 8=73.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=90.2%, 8=4.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395670: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10095msec) 00:33:44.618 slat (usec): min=7, max=112, avg=50.06, stdev=26.71 00:33:44.618 clat (msec): min=187, max=522, avg=280.00, stdev=78.77 00:33:44.618 lat (msec): min=187, max=522, avg=280.05, stdev=78.75 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 279], 60.00th=[ 296], 00:33:44.618 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 405], 95.00th=[ 418], 00:33:44.618 | 99.00th=[ 422], 99.50th=[ 493], 99.90th=[ 523], 99.95th=[ 523], 00:33:44.618 | 99.99th=[ 523] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=3.67%, avg=224.00, stdev=89.61, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=56.00, stdev=22.40, samples=20 00:33:44.618 lat (msec) : 250=42.01%, 500=57.64%, 750=0.35% 00:33:44.618 cpu : usr=98.28%, sys=1.31%, ctx=10, majf=0, minf=37 00:33:44.618 IO depths : 1=2.4%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395671: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=61, 
BW=247KiB/s (253kB/s)(2496KiB/10104msec) 00:33:44.618 slat (nsec): min=5109, max=66470, avg=26556.27, stdev=10703.17 00:33:44.618 clat (msec): min=127, max=502, avg=258.15, stdev=67.42 00:33:44.618 lat (msec): min=127, max=502, avg=258.17, stdev=67.42 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 128], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 209], 40.00th=[ 236], 50.00th=[ 264], 60.00th=[ 279], 00:33:44.618 | 70.00th=[ 284], 80.00th=[ 300], 90.00th=[ 359], 95.00th=[ 388], 00:33:44.618 | 99.00th=[ 401], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:33:44.618 | 99.99th=[ 502] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=243.20, stdev=77.28, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=60.80, stdev=19.32, samples=20 00:33:44.618 lat (msec) : 250=46.79%, 500=52.88%, 750=0.32% 00:33:44.618 cpu : usr=98.06%, sys=1.45%, ctx=44, majf=0, minf=25 00:33:44.618 IO depths : 1=3.2%, 2=9.3%, 4=24.5%, 8=53.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395672: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10119msec) 00:33:44.618 slat (nsec): min=10214, max=49675, avg=23552.00, stdev=6936.11 00:33:44.618 clat (msec): min=100, max=508, avg=252.14, stdev=69.91 00:33:44.618 lat (msec): min=100, max=508, avg=252.16, stdev=69.91 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 101], 5.00th=[ 159], 10.00th=[ 190], 20.00th=[ 190], 00:33:44.618 | 30.00th=[ 201], 40.00th=[ 222], 50.00th=[ 245], 60.00th=[ 275], 00:33:44.618 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 355], 95.00th=[ 376], 00:33:44.618 | 99.00th=[ 
401], 99.50th=[ 401], 99.90th=[ 510], 99.95th=[ 510], 00:33:44.618 | 99.99th=[ 510] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=4.08%, avg=249.60, stdev=84.41, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=62.40, stdev=21.10, samples=20 00:33:44.618 lat (msec) : 250=50.00%, 500=49.69%, 750=0.31% 00:33:44.618 cpu : usr=97.47%, sys=1.92%, ctx=10, majf=0, minf=31 00:33:44.618 IO depths : 1=3.0%, 2=9.1%, 4=24.5%, 8=53.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395673: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10103msec) 00:33:44.618 slat (usec): min=8, max=109, avg=46.75, stdev=26.32 00:33:44.618 clat (msec): min=187, max=533, avg=265.46, stdev=65.83 00:33:44.618 lat (msec): min=187, max=533, avg=265.51, stdev=65.83 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 213], 40.00th=[ 236], 50.00th=[ 271], 60.00th=[ 279], 00:33:44.618 | 70.00th=[ 300], 80.00th=[ 321], 90.00th=[ 363], 95.00th=[ 388], 00:33:44.618 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 535], 99.95th=[ 535], 00:33:44.618 | 99.99th=[ 535] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=3.87%, avg=236.80, stdev=73.89, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=59.20, stdev=18.47, samples=20 00:33:44.618 lat (msec) : 250=42.76%, 500=56.91%, 750=0.33% 00:33:44.618 cpu : usr=98.26%, sys=1.32%, ctx=16, majf=0, minf=24 00:33:44.618 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 
complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395674: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10139msec) 00:33:44.618 slat (usec): min=4, max=102, avg=42.62, stdev=27.13 00:33:44.618 clat (msec): min=46, max=371, avg=235.78, stdev=64.44 00:33:44.618 lat (msec): min=46, max=371, avg=235.83, stdev=64.45 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 47], 5.00th=[ 100], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 197], 40.00th=[ 222], 50.00th=[ 243], 60.00th=[ 268], 00:33:44.618 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 326], 00:33:44.618 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 372], 99.95th=[ 372], 00:33:44.618 | 99.99th=[ 372] 00:33:44.618 bw ( KiB/s): min= 128, max= 512, per=4.39%, avg=268.00, stdev=90.04, samples=20 00:33:44.618 iops : min= 32, max= 128, avg=67.00, stdev=22.51, samples=20 00:33:44.618 lat (msec) : 50=3.35%, 100=3.64%, 250=46.06%, 500=46.94% 00:33:44.618 cpu : usr=98.48%, sys=1.09%, ctx=12, majf=0, minf=28 00:33:44.618 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.618 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.618 filename1: (groupid=0, jobs=1): err= 0: pid=395675: Fri Dec 6 19:32:28 2024 00:33:44.618 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10103msec) 00:33:44.618 slat (usec): min=20, max=101, avg=67.92, stdev=14.90 00:33:44.618 clat (msec): min=127, max=554, avg=280.08, stdev=85.01 00:33:44.618 lat (msec): min=127, max=554, 
avg=280.15, stdev=85.02 00:33:44.618 clat percentiles (msec): 00:33:44.618 | 1.00th=[ 130], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.618 | 30.00th=[ 203], 40.00th=[ 236], 50.00th=[ 288], 60.00th=[ 305], 00:33:44.618 | 70.00th=[ 326], 80.00th=[ 368], 90.00th=[ 405], 95.00th=[ 414], 00:33:44.618 | 99.00th=[ 518], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 558], 00:33:44.618 | 99.99th=[ 558] 00:33:44.618 bw ( KiB/s): min= 128, max= 384, per=3.66%, avg=224.00, stdev=79.41, samples=20 00:33:44.618 iops : min= 32, max= 96, avg=56.00, stdev=19.85, samples=20 00:33:44.618 lat (msec) : 250=41.32%, 500=57.64%, 750=1.04% 00:33:44.618 cpu : usr=98.18%, sys=1.38%, ctx=8, majf=0, minf=29 00:33:44.618 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:33:44.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.618 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395676: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=78, BW=315KiB/s (322kB/s)(3192KiB/10137msec) 00:33:44.619 slat (nsec): min=4074, max=94387, avg=26484.71, stdev=21442.28 00:33:44.619 clat (msec): min=45, max=280, avg=202.71, stdev=51.62 00:33:44.619 lat (msec): min=45, max=280, avg=202.74, stdev=51.62 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 46], 5.00th=[ 100], 10.00th=[ 155], 20.00th=[ 169], 00:33:44.619 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 220], 00:33:44.619 | 70.00th=[ 230], 80.00th=[ 243], 90.00th=[ 275], 95.00th=[ 275], 00:33:44.619 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:33:44.619 | 99.99th=[ 279] 00:33:44.619 bw ( KiB/s): min= 240, max= 512, per=5.12%, avg=312.80, stdev=76.90, samples=20 00:33:44.619 iops : min= 60, max= 128, avg=78.20, 
stdev=19.23, samples=20 00:33:44.619 lat (msec) : 50=2.63%, 100=3.38%, 250=75.44%, 500=18.55% 00:33:44.619 cpu : usr=98.42%, sys=1.17%, ctx=14, majf=0, minf=55 00:33:44.619 IO depths : 1=2.3%, 2=8.5%, 4=25.1%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395677: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10102msec) 00:33:44.619 slat (nsec): min=8221, max=75486, avg=24722.53, stdev=11841.20 00:33:44.619 clat (msec): min=127, max=414, avg=258.79, stdev=65.13 00:33:44.619 lat (msec): min=127, max=414, avg=258.82, stdev=65.13 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 133], 5.00th=[ 190], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 207], 40.00th=[ 222], 50.00th=[ 253], 60.00th=[ 275], 00:33:44.619 | 70.00th=[ 284], 80.00th=[ 305], 90.00th=[ 363], 95.00th=[ 388], 00:33:44.619 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:33:44.619 | 99.99th=[ 414] 00:33:44.619 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=243.20, stdev=68.00, samples=20 00:33:44.619 iops : min= 32, max= 96, avg=60.80, stdev=17.00, samples=20 00:33:44.619 lat (msec) : 250=47.44%, 500=52.56% 00:33:44.619 cpu : usr=98.23%, sys=1.32%, ctx=15, majf=0, minf=27 00:33:44.619 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 
00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395678: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=56, BW=227KiB/s (233kB/s)(2296KiB/10095msec) 00:33:44.619 slat (nsec): min=8122, max=84714, avg=30508.01, stdev=17746.92 00:33:44.619 clat (msec): min=126, max=552, avg=280.98, stdev=86.64 00:33:44.619 lat (msec): min=126, max=552, avg=281.01, stdev=86.63 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 128], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:33:44.619 | 30.00th=[ 220], 40.00th=[ 224], 50.00th=[ 279], 60.00th=[ 300], 00:33:44.619 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 439], 00:33:44.619 | 99.00th=[ 535], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:33:44.619 | 99.99th=[ 550] 00:33:44.619 bw ( KiB/s): min= 112, max= 384, per=3.66%, avg=223.20, stdev=92.64, samples=20 00:33:44.619 iops : min= 28, max= 96, avg=55.80, stdev=23.16, samples=20 00:33:44.619 lat (msec) : 250=43.21%, 500=55.75%, 750=1.05% 00:33:44.619 cpu : usr=98.57%, sys=0.98%, ctx=31, majf=0, minf=28 00:33:44.619 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395679: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10108msec) 00:33:44.619 slat (usec): min=4, max=102, avg=47.26, stdev=25.25 00:33:44.619 clat (msec): min=127, max=511, avg=257.70, stdev=63.46 00:33:44.619 lat (msec): min=127, max=511, avg=257.75, stdev=63.45 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 130], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 209], 40.00th=[ 239], 50.00th=[ 259], 60.00th=[ 275], 
00:33:44.619 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 355], 95.00th=[ 388], 00:33:44.619 | 99.00th=[ 405], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 514], 00:33:44.619 | 99.99th=[ 514] 00:33:44.619 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=243.20, stdev=75.87, samples=20 00:33:44.619 iops : min= 32, max= 96, avg=60.80, stdev=18.97, samples=20 00:33:44.619 lat (msec) : 250=45.51%, 500=53.85%, 750=0.64% 00:33:44.619 cpu : usr=98.19%, sys=1.36%, ctx=13, majf=0, minf=34 00:33:44.619 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395680: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=67, BW=271KiB/s (278kB/s)(2744KiB/10119msec) 00:33:44.619 slat (usec): min=7, max=118, avg=40.73, stdev=28.52 00:33:44.619 clat (msec): min=100, max=422, avg=235.35, stdev=53.82 00:33:44.619 lat (msec): min=100, max=422, avg=235.39, stdev=53.83 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 101], 5.00th=[ 128], 10.00th=[ 188], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 197], 40.00th=[ 222], 50.00th=[ 241], 60.00th=[ 255], 00:33:44.619 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 313], 00:33:44.619 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 422], 99.95th=[ 422], 00:33:44.619 | 99.99th=[ 422] 00:33:44.619 bw ( KiB/s): min= 128, max= 384, per=4.39%, avg=268.00, stdev=78.53, samples=20 00:33:44.619 iops : min= 32, max= 96, avg=67.00, stdev=19.63, samples=20 00:33:44.619 lat (msec) : 250=56.56%, 500=43.44% 00:33:44.619 cpu : usr=98.57%, sys=1.03%, ctx=13, majf=0, minf=40 00:33:44.619 IO depths : 1=2.0%, 2=8.3%, 4=25.1%, 8=54.2%, 16=10.3%, 32=0.0%, >=64=0.0% 
00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395681: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=56, BW=227KiB/s (233kB/s)(2296KiB/10095msec) 00:33:44.619 slat (nsec): min=8434, max=78724, avg=31619.49, stdev=17747.03 00:33:44.619 clat (msec): min=118, max=552, avg=280.95, stdev=87.40 00:33:44.619 lat (msec): min=118, max=552, avg=280.98, stdev=87.39 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 128], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 220], 40.00th=[ 224], 50.00th=[ 279], 60.00th=[ 300], 00:33:44.619 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 439], 00:33:44.619 | 99.00th=[ 535], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:33:44.619 | 99.99th=[ 550] 00:33:44.619 bw ( KiB/s): min= 112, max= 384, per=3.66%, avg=223.20, stdev=92.64, samples=20 00:33:44.619 iops : min= 28, max= 96, avg=55.80, stdev=23.16, samples=20 00:33:44.619 lat (msec) : 250=43.21%, 500=55.75%, 750=1.05% 00:33:44.619 cpu : usr=98.32%, sys=1.28%, ctx=15, majf=0, minf=25 00:33:44.619 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395682: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10112msec) 00:33:44.619 slat (usec): min=5, max=112, avg=40.14, stdev=28.48 00:33:44.619 
clat (msec): min=118, max=388, avg=239.56, stdev=49.93 00:33:44.619 lat (msec): min=118, max=388, avg=239.60, stdev=49.93 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 136], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 197], 40.00th=[ 220], 50.00th=[ 239], 60.00th=[ 255], 00:33:44.619 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 300], 95.00th=[ 317], 00:33:44.619 | 99.00th=[ 384], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:33:44.619 | 99.99th=[ 388] 00:33:44.619 bw ( KiB/s): min= 144, max= 384, per=4.31%, avg=263.20, stdev=60.64, samples=20 00:33:44.619 iops : min= 36, max= 96, avg=65.80, stdev=15.16, samples=20 00:33:44.619 lat (msec) : 250=54.90%, 500=45.10% 00:33:44.619 cpu : usr=98.33%, sys=1.24%, ctx=18, majf=0, minf=23 00:33:44.619 IO depths : 1=0.6%, 2=4.7%, 4=18.5%, 8=64.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:44.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 complete : 0=0.0%, 4=92.3%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.619 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.619 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.619 filename2: (groupid=0, jobs=1): err= 0: pid=395683: Fri Dec 6 19:32:28 2024 00:33:44.619 read: IOPS=63, BW=256KiB/s (262kB/s)(2584KiB/10107msec) 00:33:44.619 slat (usec): min=8, max=110, avg=45.89, stdev=27.55 00:33:44.619 clat (msec): min=127, max=504, avg=249.04, stdev=54.65 00:33:44.619 lat (msec): min=127, max=504, avg=249.09, stdev=54.64 00:33:44.619 clat percentiles (msec): 00:33:44.619 | 1.00th=[ 129], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:33:44.619 | 30.00th=[ 213], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 262], 00:33:44.619 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 359], 00:33:44.619 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 506], 99.95th=[ 506], 00:33:44.619 | 99.99th=[ 506] 00:33:44.619 bw ( KiB/s): min= 128, max= 384, per=4.13%, avg=252.00, 
stdev=66.45, samples=20 00:33:44.619 iops : min= 32, max= 96, avg=63.00, stdev=16.61, samples=20 00:33:44.620 lat (msec) : 250=50.77%, 500=48.92%, 750=0.31% 00:33:44.620 cpu : usr=98.43%, sys=1.15%, ctx=11, majf=0, minf=26 00:33:44.620 IO depths : 1=2.3%, 2=7.0%, 4=20.1%, 8=60.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:33:44.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.620 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.620 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.620 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:44.620 00:33:44.620 Run status group 0 (all jobs): 00:33:44.620 READ: bw=6099KiB/s (6246kB/s), 227KiB/s-315KiB/s (233kB/s-322kB/s), io=60.4MiB (63.3MB), run=10049-10139msec 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 
19:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 bdev_null0 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 
--serial-number 53313233-0 --allow-any-host 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 [2024-12-06 19:32:29.031923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:44.620 bdev_null1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:44.620 { 00:33:44.620 "params": { 00:33:44.620 "name": "Nvme$subsystem", 00:33:44.620 "trtype": "$TEST_TRANSPORT", 00:33:44.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:44.620 "adrfam": "ipv4", 00:33:44.620 "trsvcid": "$NVMF_PORT", 00:33:44.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:44.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:44.620 "hdgst": ${hdgst:-false}, 00:33:44.620 "ddgst": ${ddgst:-false} 00:33:44.620 }, 00:33:44.620 "method": "bdev_nvme_attach_controller" 00:33:44.620 } 00:33:44.620 EOF 00:33:44.620 )") 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:44.620 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:44.621 { 00:33:44.621 "params": { 00:33:44.621 "name": "Nvme$subsystem", 00:33:44.621 "trtype": "$TEST_TRANSPORT", 00:33:44.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:44.621 "adrfam": "ipv4", 00:33:44.621 "trsvcid": "$NVMF_PORT", 00:33:44.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:44.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:44.621 "hdgst": ${hdgst:-false}, 00:33:44.621 "ddgst": ${ddgst:-false} 00:33:44.621 }, 00:33:44.621 "method": "bdev_nvme_attach_controller" 00:33:44.621 } 00:33:44.621 EOF 00:33:44.621 )") 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:44.621 19:32:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:44.621 "params": { 00:33:44.621 "name": "Nvme0", 00:33:44.621 "trtype": "tcp", 00:33:44.621 "traddr": "10.0.0.2", 00:33:44.621 "adrfam": "ipv4", 00:33:44.621 "trsvcid": "4420", 00:33:44.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.621 "hdgst": false, 00:33:44.621 "ddgst": false 00:33:44.621 }, 00:33:44.621 "method": "bdev_nvme_attach_controller" 00:33:44.621 },{ 00:33:44.621 "params": { 00:33:44.621 "name": "Nvme1", 00:33:44.621 "trtype": "tcp", 00:33:44.621 "traddr": "10.0.0.2", 00:33:44.621 "adrfam": "ipv4", 00:33:44.621 "trsvcid": "4420", 00:33:44.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:44.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:44.621 "hdgst": false, 00:33:44.621 "ddgst": false 00:33:44.621 }, 00:33:44.621 "method": "bdev_nvme_attach_controller" 00:33:44.621 }' 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:44.621 19:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:44.621 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:44.621 ... 00:33:44.621 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:44.621 ... 00:33:44.621 fio-3.35 00:33:44.621 Starting 4 threads 00:33:51.188 00:33:51.188 filename0: (groupid=0, jobs=1): err= 0: pid=397071: Fri Dec 6 19:32:35 2024 00:33:51.188 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5003msec) 00:33:51.188 slat (nsec): min=4134, max=79189, avg=20381.42, stdev=12399.97 00:33:51.188 clat (usec): min=712, max=7581, avg=4220.64, stdev=419.86 00:33:51.188 lat (usec): min=731, max=7606, avg=4241.02, stdev=420.27 00:33:51.188 clat percentiles (usec): 00:33:51.188 | 1.00th=[ 2868], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4047], 00:33:51.188 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:33:51.188 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4686], 00:33:51.188 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 7046], 99.95th=[ 7242], 00:33:51.188 | 99.99th=[ 7570] 00:33:51.188 bw ( KiB/s): min=14576, max=15840, per=25.20%, avg=14916.50, stdev=357.89, samples=10 00:33:51.188 iops : min= 1822, max= 1980, avg=1864.50, stdev=44.73, samples=10 00:33:51.188 lat (usec) : 750=0.01%, 1000=0.02% 00:33:51.188 lat (msec) : 2=0.28%, 4=15.48%, 10=84.21% 00:33:51.188 cpu : usr=95.12%, sys=4.40%, ctx=11, majf=0, minf=0 00:33:51.188 IO depths : 1=0.8%, 2=13.8%, 4=58.6%, 
8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.188 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.188 issued rwts: total=9329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:51.188 filename0: (groupid=0, jobs=1): err= 0: pid=397072: Fri Dec 6 19:32:35 2024 00:33:51.188 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:33:51.189 slat (nsec): min=8068, max=79598, avg=24906.31, stdev=12463.64 00:33:51.189 clat (usec): min=879, max=7595, avg=4222.22, stdev=505.45 00:33:51.189 lat (usec): min=893, max=7614, avg=4247.13, stdev=506.11 00:33:51.189 clat percentiles (usec): 00:33:51.189 | 1.00th=[ 2409], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4080], 00:33:51.189 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:33:51.189 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:33:51.189 | 99.00th=[ 6194], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:33:51.189 | 99.99th=[ 7570] 00:33:51.189 bw ( KiB/s): min=14512, max=15216, per=25.03%, avg=14815.70, stdev=205.38, samples=10 00:33:51.189 iops : min= 1814, max= 1902, avg=1851.90, stdev=25.66, samples=10 00:33:51.189 lat (usec) : 1000=0.09% 00:33:51.189 lat (msec) : 2=0.57%, 4=13.61%, 10=85.73% 00:33:51.189 cpu : usr=93.04%, sys=5.00%, ctx=145, majf=0, minf=9 00:33:51.189 IO depths : 1=0.5%, 2=19.9%, 4=54.0%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 issued rwts: total=9266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.189 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:51.189 filename1: (groupid=0, jobs=1): err= 0: pid=397073: Fri Dec 6 19:32:35 2024 00:33:51.189 read: IOPS=1844, BW=14.4MiB/s 
(15.1MB/s)(72.1MiB/5002msec) 00:33:51.189 slat (nsec): min=7911, max=79444, avg=24058.98, stdev=11815.28 00:33:51.189 clat (usec): min=817, max=7853, avg=4246.55, stdev=534.44 00:33:51.189 lat (usec): min=832, max=7894, avg=4270.61, stdev=534.76 00:33:51.189 clat percentiles (usec): 00:33:51.189 | 1.00th=[ 2474], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4080], 00:33:51.189 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:33:51.189 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4817], 00:33:51.189 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7635], 00:33:51.189 | 99.99th=[ 7832] 00:33:51.189 bw ( KiB/s): min=14544, max=14848, per=24.91%, avg=14745.60, stdev=93.66, samples=10 00:33:51.189 iops : min= 1818, max= 1856, avg=1843.20, stdev=11.71, samples=10 00:33:51.189 lat (usec) : 1000=0.07% 00:33:51.189 lat (msec) : 2=0.63%, 4=12.78%, 10=86.52% 00:33:51.189 cpu : usr=96.12%, sys=3.38%, ctx=7, majf=0, minf=9 00:33:51.189 IO depths : 1=0.2%, 2=19.5%, 4=54.3%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 issued rwts: total=9224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.189 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:51.189 filename1: (groupid=0, jobs=1): err= 0: pid=397074: Fri Dec 6 19:32:35 2024 00:33:51.189 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5004msec) 00:33:51.189 slat (nsec): min=4230, max=79294, avg=22708.77, stdev=12561.27 00:33:51.189 clat (usec): min=908, max=7817, avg=4273.90, stdev=463.02 00:33:51.189 lat (usec): min=928, max=7859, avg=4296.61, stdev=462.80 00:33:51.189 clat percentiles (usec): 00:33:51.189 | 1.00th=[ 2868], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4080], 00:33:51.189 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:33:51.189 | 70.00th=[ 4359], 
80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4817], 00:33:51.189 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7242], 99.95th=[ 7439], 00:33:51.189 | 99.99th=[ 7832] 00:33:51.189 bw ( KiB/s): min=14512, max=14944, per=24.86%, avg=14715.20, stdev=141.92, samples=10 00:33:51.189 iops : min= 1814, max= 1868, avg=1839.40, stdev=17.74, samples=10 00:33:51.189 lat (usec) : 1000=0.01% 00:33:51.189 lat (msec) : 2=0.22%, 4=13.32%, 10=86.45% 00:33:51.189 cpu : usr=95.82%, sys=3.68%, ctx=11, majf=0, minf=9 00:33:51.189 IO depths : 1=0.2%, 2=14.4%, 4=58.6%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:51.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.189 issued rwts: total=9205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.189 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:51.189 00:33:51.189 Run status group 0 (all jobs): 00:33:51.189 READ: bw=57.8MiB/s (60.6MB/s), 14.4MiB/s-14.6MiB/s (15.1MB/s-15.3MB/s), io=289MiB (303MB), run=5002-5004msec 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 
19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 00:33:51.189 real 0m24.839s 00:33:51.189 user 4m37.085s 00:33:51.189 sys 0m5.443s 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 ************************************ 00:33:51.189 END TEST fio_dif_rand_params 00:33:51.189 ************************************ 00:33:51.189 19:32:35 nvmf_dif -- 
target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:51.189 19:32:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.189 19:32:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 ************************************ 00:33:51.189 START TEST fio_dif_digest 00:33:51.189 ************************************ 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 
00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 bdev_null0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:51.189 [2024-12-06 19:32:35.463547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:51.189 19:32:35 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:51.189 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:51.190 { 00:33:51.190 "params": { 00:33:51.190 "name": "Nvme$subsystem", 00:33:51.190 "trtype": "$TEST_TRANSPORT", 00:33:51.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.190 "adrfam": "ipv4", 00:33:51.190 "trsvcid": "$NVMF_PORT", 00:33:51.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.190 "hdgst": ${hdgst:-false}, 00:33:51.190 "ddgst": ${ddgst:-false} 00:33:51.190 }, 00:33:51.190 "method": "bdev_nvme_attach_controller" 00:33:51.190 } 00:33:51.190 EOF 00:33:51.190 )") 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:51.190 19:32:35 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:51.190 "params": { 00:33:51.190 "name": "Nvme0", 00:33:51.190 "trtype": "tcp", 00:33:51.190 "traddr": "10.0.0.2", 00:33:51.190 "adrfam": "ipv4", 00:33:51.190 "trsvcid": "4420", 00:33:51.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.190 "hdgst": true, 00:33:51.190 "ddgst": true 00:33:51.190 }, 00:33:51.190 "method": "bdev_nvme_attach_controller" 00:33:51.190 }' 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:51.190 19:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:51.190 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:51.190 ... 
00:33:51.190 fio-3.35 00:33:51.190 Starting 3 threads 00:34:03.379 00:34:03.379 filename0: (groupid=0, jobs=1): err= 0: pid=397826: Fri Dec 6 19:32:46 2024 00:34:03.379 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(265MiB/10047msec) 00:34:03.379 slat (nsec): min=5653, max=53621, avg=17242.32, stdev=4361.79 00:34:03.379 clat (usec): min=7794, max=56404, avg=14201.61, stdev=1631.10 00:34:03.379 lat (usec): min=7807, max=56419, avg=14218.85, stdev=1630.86 00:34:03.379 clat percentiles (usec): 00:34:03.379 | 1.00th=[11469], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:34:03.379 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:34:03.379 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:34:03.379 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20317], 99.95th=[46924], 00:34:03.379 | 99.99th=[56361] 00:34:03.379 bw ( KiB/s): min=25344, max=28160, per=32.53%, avg=27059.20, stdev=757.14, samples=20 00:34:03.379 iops : min= 198, max= 220, avg=211.40, stdev= 5.92, samples=20 00:34:03.379 lat (msec) : 10=0.52%, 20=99.34%, 50=0.09%, 100=0.05% 00:34:03.379 cpu : usr=94.43%, sys=5.01%, ctx=14, majf=0, minf=22 00:34:03.379 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.379 filename0: (groupid=0, jobs=1): err= 0: pid=397827: Fri Dec 6 19:32:46 2024 00:34:03.379 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(292MiB/10046msec) 00:34:03.379 slat (nsec): min=5789, max=49417, avg=17896.71, stdev=4797.57 00:34:03.379 clat (usec): min=9218, max=53862, avg=12856.01, stdev=2183.69 00:34:03.379 lat (usec): min=9233, max=53877, avg=12873.91, stdev=2184.06 00:34:03.379 clat percentiles (usec): 00:34:03.379 | 
1.00th=[10290], 5.00th=[10814], 10.00th=[11338], 20.00th=[11731], 00:34:03.379 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:34:03.379 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:34:03.379 | 99.00th=[15795], 99.50th=[16319], 99.90th=[53216], 99.95th=[53740], 00:34:03.379 | 99.99th=[53740] 00:34:03.379 bw ( KiB/s): min=27904, max=32512, per=35.93%, avg=29888.00, stdev=1720.06, samples=20 00:34:03.379 iops : min= 218, max= 254, avg=233.50, stdev=13.44, samples=20 00:34:03.379 lat (msec) : 10=0.51%, 20=99.27%, 100=0.21% 00:34:03.379 cpu : usr=94.54%, sys=4.89%, ctx=20, majf=0, minf=33 00:34:03.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.379 filename0: (groupid=0, jobs=1): err= 0: pid=397828: Fri Dec 6 19:32:46 2024 00:34:03.379 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(260MiB/10046msec) 00:34:03.379 slat (nsec): min=5769, max=53118, avg=17608.15, stdev=4425.09 00:34:03.379 clat (usec): min=8413, max=49093, avg=14468.97, stdev=1528.83 00:34:03.379 lat (usec): min=8427, max=49106, avg=14486.57, stdev=1528.94 00:34:03.379 clat percentiles (usec): 00:34:03.379 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:34:03.379 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:34:03.379 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:34:03.379 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21103], 99.95th=[45876], 00:34:03.379 | 99.99th=[49021] 00:34:03.379 bw ( KiB/s): min=25600, max=27904, per=31.93%, avg=26560.00, stdev=609.64, samples=20 00:34:03.379 iops : min= 200, max= 218, avg=207.50, stdev= 4.76, 
samples=20 00:34:03.379 lat (msec) : 10=0.39%, 20=99.42%, 50=0.19% 00:34:03.379 cpu : usr=90.05%, sys=7.02%, ctx=340, majf=0, minf=38 00:34:03.379 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.379 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:03.379 00:34:03.379 Run status group 0 (all jobs): 00:34:03.379 READ: bw=81.2MiB/s (85.2MB/s), 25.8MiB/s-29.1MiB/s (27.1MB/s-30.5MB/s), io=816MiB (856MB), run=10046-10047msec 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.379 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.380 00:34:03.380 real 
0m11.253s 00:34:03.380 user 0m29.194s 00:34:03.380 sys 0m1.980s 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.380 19:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:03.380 ************************************ 00:34:03.380 END TEST fio_dif_digest 00:34:03.380 ************************************ 00:34:03.380 19:32:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:03.380 19:32:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.380 rmmod nvme_tcp 00:34:03.380 rmmod nvme_fabrics 00:34:03.380 rmmod nvme_keyring 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 391680 ']' 00:34:03.380 19:32:46 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 391680 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 391680 ']' 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 391680 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391680 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.380 19:32:46 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391680' 00:34:03.380 killing process with pid 391680 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@973 -- # kill 391680 00:34:03.380 19:32:46 nvmf_dif -- common/autotest_common.sh@978 -- # wait 391680 00:34:03.380 19:32:47 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:03.380 19:32:47 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:03.380 Waiting for block devices as requested 00:34:03.380 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:03.380 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:03.380 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:03.640 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:03.640 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:03.640 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:03.899 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:03.899 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:03.899 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:03.899 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:04.160 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:04.160 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:04.160 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:04.160 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:04.447 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:04.447 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:04.447 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.740 19:32:49 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.740 19:32:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.740 19:32:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.672 19:32:51 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.672 00:34:06.672 real 1m7.640s 00:34:06.672 user 6m34.349s 00:34:06.672 sys 0m17.037s 00:34:06.672 19:32:51 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.672 19:32:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:06.672 ************************************ 00:34:06.672 END TEST nvmf_dif 00:34:06.672 ************************************ 00:34:06.672 19:32:51 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:06.672 19:32:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:06.672 19:32:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.672 19:32:51 -- common/autotest_common.sh@10 -- # set +x 00:34:06.672 ************************************ 00:34:06.672 START TEST nvmf_abort_qd_sizes 00:34:06.672 ************************************ 00:34:06.672 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:06.672 * Looking for test storage... 
00:34:06.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.672 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:06.672 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:06.672 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:06.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.934 --rc genhtml_branch_coverage=1 00:34:06.934 --rc genhtml_function_coverage=1 00:34:06.934 --rc genhtml_legend=1 00:34:06.934 --rc geninfo_all_blocks=1 00:34:06.934 --rc geninfo_unexecuted_blocks=1 00:34:06.934 00:34:06.934 ' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:06.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.934 --rc genhtml_branch_coverage=1 00:34:06.934 --rc genhtml_function_coverage=1 00:34:06.934 --rc genhtml_legend=1 00:34:06.934 --rc 
geninfo_all_blocks=1 00:34:06.934 --rc geninfo_unexecuted_blocks=1 00:34:06.934 00:34:06.934 ' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:06.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.934 --rc genhtml_branch_coverage=1 00:34:06.934 --rc genhtml_function_coverage=1 00:34:06.934 --rc genhtml_legend=1 00:34:06.934 --rc geninfo_all_blocks=1 00:34:06.934 --rc geninfo_unexecuted_blocks=1 00:34:06.934 00:34:06.934 ' 00:34:06.934 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:06.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.935 --rc genhtml_branch_coverage=1 00:34:06.935 --rc genhtml_function_coverage=1 00:34:06.935 --rc genhtml_legend=1 00:34:06.935 --rc geninfo_all_blocks=1 00:34:06.935 --rc geninfo_unexecuted_blocks=1 00:34:06.935 00:34:06.935 ' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.935 19:32:51 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.935 19:32:51 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.935 19:32:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:09.464 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.465 19:32:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:09.465 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:09.465 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:09.465 Found net devices under 0000:84:00.0: cvl_0_0 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:09.465 Found net devices under 0000:84:00.1: cvl_0_1 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.465 19:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:34:09.465 00:34:09.465 --- 10.0.0.2 ping statistics --- 00:34:09.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.465 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:34:09.465 00:34:09.465 --- 10.0.0.1 ping statistics --- 00:34:09.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.465 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:34:09.465 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.466 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:09.466 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:09.466 19:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:10.402 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:10.402 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:10.402 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:11.339 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.597 19:32:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=402777 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 402777 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 402777 ']' 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.597 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.597 [2024-12-06 19:32:56.472867] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:34:11.597 [2024-12-06 19:32:56.472960] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.597 [2024-12-06 19:32:56.543886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.597 [2024-12-06 19:32:56.602360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.597 [2024-12-06 19:32:56.602412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.597 [2024-12-06 19:32:56.602441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.597 [2024-12-06 19:32:56.602453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.597 [2024-12-06 19:32:56.602463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:11.597 [2024-12-06 19:32:56.604058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.597 [2024-12-06 19:32:56.604118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.597 [2024-12-06 19:32:56.604184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.597 [2024-12-06 19:32:56.604187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 
00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.857 19:32:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:11.857 ************************************ 00:34:11.857 START TEST spdk_target_abort 00:34:11.857 ************************************ 00:34:11.857 19:32:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:11.857 19:32:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:11.857 19:32:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:34:11.857 19:32:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.857 19:32:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.143 spdk_targetn1 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.143 [2024-12-06 19:32:59.636312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:15.143 [2024-12-06 19:32:59.680658] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:15.143 19:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:18.426 Initializing NVMe Controllers 00:34:18.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:18.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:18.426 Initialization complete. Launching workers. 
00:34:18.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11530, failed: 0 00:34:18.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1201, failed to submit 10329 00:34:18.426 success 699, unsuccessful 502, failed 0 00:34:18.426 19:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:18.426 19:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:21.714 Initializing NVMe Controllers 00:34:21.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:21.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:21.714 Initialization complete. Launching workers. 00:34:21.714 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8833, failed: 0 00:34:21.714 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7592 00:34:21.714 success 330, unsuccessful 911, failed 0 00:34:21.714 19:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:21.714 19:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:24.998 Initializing NVMe Controllers 00:34:24.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:24.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:24.998 Initialization complete. Launching workers. 
00:34:24.998 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31374, failed: 0 00:34:24.998 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2657, failed to submit 28717 00:34:24.998 success 534, unsuccessful 2123, failed 0 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.998 19:33:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 402777 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 402777 ']' 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 402777 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402777 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402777' 00:34:25.937 killing process with pid 402777 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 402777 00:34:25.937 19:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 402777 00:34:26.197 00:34:26.197 real 0m14.295s 00:34:26.197 user 0m53.797s 00:34:26.197 sys 0m3.103s 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:26.197 ************************************ 00:34:26.197 END TEST spdk_target_abort 00:34:26.197 ************************************ 00:34:26.197 19:33:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:26.197 19:33:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.197 19:33:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.197 19:33:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.197 ************************************ 00:34:26.197 START TEST kernel_target_abort 00:34:26.197 ************************************ 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:26.197 19:33:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:26.197 19:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.577 Waiting for block devices as requested 00:34:27.577 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:27.577 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.836 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:27.836 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:27.836 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.095 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:28.095 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:28.095 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.095 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.095 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:28.354 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:28.354 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:28.354 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.614 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:28.614 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:28.614 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.614 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:28.872 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:28.873 No valid GPT data, bailing 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:28.873 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:34:29.131 00:34:29.131 Discovery Log Number of Records 2, Generation counter 2 00:34:29.131 =====Discovery Log Entry 0====== 00:34:29.131 trtype: tcp 00:34:29.131 adrfam: ipv4 00:34:29.131 subtype: current discovery subsystem 00:34:29.131 treq: not specified, sq flow control disable supported 00:34:29.131 portid: 1 00:34:29.131 trsvcid: 4420 00:34:29.131 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:29.131 traddr: 10.0.0.1 00:34:29.131 eflags: none 00:34:29.131 sectype: none 00:34:29.131 =====Discovery Log Entry 1====== 00:34:29.131 trtype: tcp 00:34:29.131 adrfam: ipv4 00:34:29.131 subtype: nvme subsystem 00:34:29.131 treq: not specified, sq flow control disable supported 00:34:29.131 portid: 1 00:34:29.131 trsvcid: 4420 00:34:29.131 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:29.131 traddr: 10.0.0.1 00:34:29.131 eflags: none 00:34:29.131 sectype: none 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.131 19:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:32.407 Initializing NVMe Controllers 00:34:32.407 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:32.407 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:32.407 Initialization complete. Launching workers. 
00:34:32.407 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48450, failed: 0 00:34:32.407 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48450, failed to submit 0 00:34:32.407 success 0, unsuccessful 48450, failed 0 00:34:32.407 19:33:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:32.407 19:33:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:35.685 Initializing NVMe Controllers 00:34:35.685 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:35.685 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:35.685 Initialization complete. Launching workers. 00:34:35.685 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93680, failed: 0 00:34:35.685 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21706, failed to submit 71974 00:34:35.685 success 0, unsuccessful 21706, failed 0 00:34:35.685 19:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:35.685 19:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.965 Initializing NVMe Controllers 00:34:38.965 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:38.965 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:38.965 Initialization complete. Launching workers. 
00:34:38.965 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87982, failed: 0 00:34:38.965 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21974, failed to submit 66008 00:34:38.965 success 0, unsuccessful 21974, failed 0 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:38.965 19:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:39.546 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.546 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.546 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:39.546 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:40.484 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:34:40.743 00:34:40.743 real 0m14.457s 00:34:40.743 user 0m6.067s 00:34:40.743 sys 0m3.571s 00:34:40.743 19:33:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.743 19:33:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 ************************************ 00:34:40.743 END TEST kernel_target_abort 00:34:40.743 ************************************ 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.743 rmmod nvme_tcp 00:34:40.743 rmmod nvme_fabrics 00:34:40.743 rmmod nvme_keyring 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 402777 ']' 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 402777 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 402777 ']' 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 402777 00:34:40.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (402777) - No such process 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 402777 is not found' 00:34:40.743 Process with pid 402777 is not found 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:40.743 19:33:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.119 Waiting for block devices as requested 00:34:42.119 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:34:42.119 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:42.378 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:42.378 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:42.378 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:42.636 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:42.636 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:42.636 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:42.636 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:42.636 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:42.895 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:42.895 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:42.895 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:42.895 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:43.153 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:34:43.153 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:43.153 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.412 19:33:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.323 19:33:30 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.323 00:34:45.323 real 0m38.717s 00:34:45.323 user 1m2.203s 00:34:45.323 sys 0m10.439s 00:34:45.323 19:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.323 19:33:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:45.323 ************************************ 00:34:45.323 END TEST nvmf_abort_qd_sizes 00:34:45.323 ************************************ 00:34:45.324 19:33:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:45.324 19:33:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:45.324 19:33:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:45.324 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:34:45.583 ************************************ 00:34:45.583 START TEST keyring_file 00:34:45.583 ************************************ 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:45.583 * Looking for test storage... 00:34:45.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.583 19:33:30 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.583 19:33:30 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.583 --rc genhtml_branch_coverage=1 00:34:45.583 --rc genhtml_function_coverage=1 00:34:45.583 --rc genhtml_legend=1 00:34:45.583 --rc geninfo_all_blocks=1 00:34:45.583 --rc geninfo_unexecuted_blocks=1 00:34:45.583 00:34:45.583 ' 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.583 --rc genhtml_branch_coverage=1 00:34:45.583 --rc genhtml_function_coverage=1 00:34:45.583 --rc genhtml_legend=1 00:34:45.583 --rc geninfo_all_blocks=1 00:34:45.583 --rc 
geninfo_unexecuted_blocks=1 00:34:45.583 00:34:45.583 ' 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.583 --rc genhtml_branch_coverage=1 00:34:45.583 --rc genhtml_function_coverage=1 00:34:45.583 --rc genhtml_legend=1 00:34:45.583 --rc geninfo_all_blocks=1 00:34:45.583 --rc geninfo_unexecuted_blocks=1 00:34:45.583 00:34:45.583 ' 00:34:45.583 19:33:30 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:45.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.583 --rc genhtml_branch_coverage=1 00:34:45.583 --rc genhtml_function_coverage=1 00:34:45.583 --rc genhtml_legend=1 00:34:45.583 --rc geninfo_all_blocks=1 00:34:45.583 --rc geninfo_unexecuted_blocks=1 00:34:45.583 00:34:45.583 ' 00:34:45.583 19:33:30 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:45.583 19:33:30 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.583 19:33:30 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.583 19:33:30 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.584 19:33:30 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.584 19:33:30 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.584 19:33:30 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.584 19:33:30 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.584 19:33:30 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.584 19:33:30 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.584 19:33:30 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.584 19:33:30 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:45.584 19:33:30 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:34:45.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BhiLPFhOqM 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BhiLPFhOqM 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BhiLPFhOqM 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BhiLPFhOqM 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Mf5Wn7zXCs 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:45.584 19:33:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Mf5Wn7zXCs 00:34:45.584 19:33:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Mf5Wn7zXCs 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Mf5Wn7zXCs 
00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@30 -- # tgtpid=409209 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:45.584 19:33:30 keyring_file -- keyring/file.sh@32 -- # waitforlisten 409209 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 409209 ']' 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.584 19:33:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:45.844 [2024-12-06 19:33:30.667638] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:34:45.844 [2024-12-06 19:33:30.667749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409209 ] 00:34:45.844 [2024-12-06 19:33:30.729245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.844 [2024-12-06 19:33:30.784178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:46.103 19:33:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:46.103 [2024-12-06 19:33:31.039225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.103 null0 00:34:46.103 [2024-12-06 19:33:31.071268] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:46.103 [2024-12-06 19:33:31.071566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.103 19:33:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:46.103 [2024-12-06 19:33:31.095313] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:46.103 request: 00:34:46.103 { 00:34:46.103 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.103 "secure_channel": false, 00:34:46.103 "listen_address": { 00:34:46.103 "trtype": "tcp", 00:34:46.103 "traddr": "127.0.0.1", 00:34:46.103 "trsvcid": "4420" 00:34:46.103 }, 00:34:46.103 "method": "nvmf_subsystem_add_listener", 00:34:46.103 "req_id": 1 00:34:46.103 } 00:34:46.103 Got JSON-RPC error response 00:34:46.103 response: 00:34:46.103 { 00:34:46.103 "code": -32602, 00:34:46.103 "message": "Invalid parameters" 00:34:46.103 } 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.103 19:33:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=409218 00:34:46.103 19:33:31 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:46.103 19:33:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 409218 /var/tmp/bperf.sock 00:34:46.103 19:33:31 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 409218 ']' 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.103 19:33:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:46.103 [2024-12-06 19:33:31.143444] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 00:34:46.103 [2024-12-06 19:33:31.143510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409218 ] 00:34:46.361 [2024-12-06 19:33:31.210629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.361 [2024-12-06 19:33:31.270098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.361 19:33:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.361 19:33:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:46.361 19:33:31 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:46.361 19:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:46.619 19:33:31 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Mf5Wn7zXCs 00:34:46.619 19:33:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Mf5Wn7zXCs 00:34:46.877 19:33:31 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:46.877 19:33:31 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:46.877 19:33:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:46.877 19:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:46.877 19:33:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:47.544 19:33:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BhiLPFhOqM == \/\t\m\p\/\t\m\p\.\B\h\i\L\P\F\h\O\q\M ]] 00:34:47.544 19:33:32 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:47.544 19:33:32 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:47.544 19:33:32 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Mf5Wn7zXCs == \/\t\m\p\/\t\m\p\.\M\f\5\W\n\7\z\X\C\s ]] 00:34:47.544 19:33:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:47.544 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:34:47.803 19:33:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:47.803 19:33:32 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:47.803 19:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:47.803 19:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:47.803 19:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:47.803 19:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:47.803 19:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:48.062 19:33:33 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:48.062 19:33:33 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:48.062 19:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:48.321 [2024-12-06 19:33:33.282886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:48.321 nvme0n1 00:34:48.321 19:33:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:48.321 19:33:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:48.321 19:33:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:48.321 19:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:48.578 19:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:48.578 19:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:34:48.838 19:33:33 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:48.838 19:33:33 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:48.838 19:33:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:48.838 19:33:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:48.838 19:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:48.838 19:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:48.838 19:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:49.096 19:33:33 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:49.096 19:33:33 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:49.096 Running I/O for 1 seconds... 00:34:50.028 10465.00 IOPS, 40.88 MiB/s 00:34:50.028 Latency(us) 00:34:50.028 [2024-12-06T18:33:35.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.028 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:50.028 nvme0n1 : 1.01 10516.34 41.08 0.00 0.00 12136.31 7184.69 22136.60 00:34:50.028 [2024-12-06T18:33:35.077Z] =================================================================================================================== 00:34:50.028 [2024-12-06T18:33:35.077Z] Total : 10516.34 41.08 0.00 0.00 12136.31 7184.69 22136.60 00:34:50.028 { 00:34:50.028 "results": [ 00:34:50.028 { 00:34:50.028 "job": "nvme0n1", 00:34:50.028 "core_mask": "0x2", 00:34:50.028 "workload": "randrw", 00:34:50.028 "percentage": 50, 00:34:50.028 "status": "finished", 00:34:50.028 "queue_depth": 128, 00:34:50.028 "io_size": 4096, 00:34:50.028 "runtime": 1.00729, 00:34:50.028 "iops": 10516.3359112073, 00:34:50.028 "mibps": 41.07943715315351, 
00:34:50.028 "io_failed": 0, 00:34:50.028 "io_timeout": 0, 00:34:50.028 "avg_latency_us": 12136.314627339509, 00:34:50.028 "min_latency_us": 7184.687407407408, 00:34:50.028 "max_latency_us": 22136.604444444445 00:34:50.028 } 00:34:50.028 ], 00:34:50.028 "core_count": 1 00:34:50.028 } 00:34:50.028 19:33:35 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:50.028 19:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:50.285 19:33:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:50.285 19:33:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:50.285 19:33:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:50.285 19:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:50.285 19:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:50.285 19:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:50.849 19:33:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:50.849 19:33:35 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:50.849 19:33:35 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:50.849 19:33:35 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:50.849 19:33:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:50.849 19:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:51.107 [2024-12-06 19:33:36.139419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:51.107 [2024-12-06 19:33:36.139860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa407b0 (107): Transport endpoint is not connected 00:34:51.107 [2024-12-06 19:33:36.140853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa407b0 (9): Bad file descriptor 00:34:51.107 [2024-12-06 19:33:36.141852] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:51.107 [2024-12-06 19:33:36.141873] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:51.107 [2024-12-06 19:33:36.141887] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:51.107 [2024-12-06 19:33:36.141901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:34:51.107 request: 00:34:51.107 { 00:34:51.107 "name": "nvme0", 00:34:51.107 "trtype": "tcp", 00:34:51.107 "traddr": "127.0.0.1", 00:34:51.107 "adrfam": "ipv4", 00:34:51.107 "trsvcid": "4420", 00:34:51.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.107 "prchk_reftag": false, 00:34:51.107 "prchk_guard": false, 00:34:51.107 "hdgst": false, 00:34:51.107 "ddgst": false, 00:34:51.107 "psk": "key1", 00:34:51.107 "allow_unrecognized_csi": false, 00:34:51.107 "method": "bdev_nvme_attach_controller", 00:34:51.107 "req_id": 1 00:34:51.107 } 00:34:51.107 Got JSON-RPC error response 00:34:51.107 response: 00:34:51.107 { 00:34:51.107 "code": -5, 00:34:51.107 "message": "Input/output error" 00:34:51.107 } 00:34:51.365 19:33:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:51.365 19:33:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:51.365 19:33:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:51.365 19:33:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:51.365 19:33:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:51.365 19:33:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:51.365 19:33:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.365 19:33:36 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:34:51.365 19:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:51.365 19:33:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:51.672 19:33:36 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:51.672 19:33:36 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:51.672 19:33:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:51.672 19:33:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:51.672 19:33:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:51.672 19:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:51.672 19:33:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:51.672 19:33:36 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:51.672 19:33:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:51.673 19:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:52.236 19:33:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:52.236 19:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:52.236 19:33:37 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:52.236 19:33:37 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:52.236 19:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:52.505 19:33:37 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:34:52.506 19:33:37 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BhiLPFhOqM 00:34:52.506 19:33:37 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.506 19:33:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:52.506 19:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:52.769 [2024-12-06 19:33:37.758794] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BhiLPFhOqM': 0100660 00:34:52.769 [2024-12-06 19:33:37.758831] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:52.769 request: 00:34:52.769 { 00:34:52.769 "name": "key0", 00:34:52.769 "path": "/tmp/tmp.BhiLPFhOqM", 00:34:52.769 "method": "keyring_file_add_key", 00:34:52.769 "req_id": 1 00:34:52.769 } 00:34:52.769 Got JSON-RPC error response 00:34:52.769 response: 00:34:52.769 { 00:34:52.769 "code": -1, 00:34:52.769 "message": "Operation not permitted" 00:34:52.769 } 00:34:52.769 19:33:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:52.769 19:33:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:52.769 19:33:37 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:52.769 19:33:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:52.769 19:33:37 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BhiLPFhOqM 00:34:52.769 19:33:37 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:52.769 19:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhiLPFhOqM 00:34:53.025 19:33:38 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BhiLPFhOqM 00:34:53.025 19:33:38 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:53.025 19:33:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:53.025 19:33:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:53.026 19:33:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:53.026 19:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:53.026 19:33:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:53.591 19:33:38 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:53.591 19:33:38 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.591 19:33:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:53.591 19:33:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.591 19:33:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:53.591 19:33:38 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.591 19:33:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:53.591 19:33:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.592 19:33:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.592 19:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:53.592 [2024-12-06 19:33:38.581076] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BhiLPFhOqM': No such file or directory 00:34:53.592 [2024-12-06 19:33:38.581118] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:53.592 [2024-12-06 19:33:38.581151] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:53.592 [2024-12-06 19:33:38.581175] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:53.592 [2024-12-06 19:33:38.581189] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:53.592 [2024-12-06 19:33:38.581201] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:53.592 request: 00:34:53.592 { 00:34:53.592 "name": "nvme0", 00:34:53.592 "trtype": "tcp", 00:34:53.592 "traddr": "127.0.0.1", 00:34:53.592 "adrfam": "ipv4", 00:34:53.592 "trsvcid": "4420", 00:34:53.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.592 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:34:53.592 "prchk_reftag": false, 00:34:53.592 "prchk_guard": false, 00:34:53.592 "hdgst": false, 00:34:53.592 "ddgst": false, 00:34:53.592 "psk": "key0", 00:34:53.592 "allow_unrecognized_csi": false, 00:34:53.592 "method": "bdev_nvme_attach_controller", 00:34:53.592 "req_id": 1 00:34:53.592 } 00:34:53.592 Got JSON-RPC error response 00:34:53.592 response: 00:34:53.592 { 00:34:53.592 "code": -19, 00:34:53.592 "message": "No such device" 00:34:53.592 } 00:34:53.592 19:33:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:53.592 19:33:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:53.592 19:33:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:53.592 19:33:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:53.592 19:33:38 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:53.592 19:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:53.850 19:33:38 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TKeis5QZKv 00:34:53.850 19:33:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:53.850 19:33:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:53.850 19:33:38 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:34:53.850 19:33:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:53.850 19:33:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:53.850 19:33:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:53.850 19:33:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:54.109 19:33:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TKeis5QZKv 00:34:54.109 19:33:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TKeis5QZKv 00:34:54.109 19:33:38 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TKeis5QZKv 00:34:54.109 19:33:38 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TKeis5QZKv 00:34:54.109 19:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TKeis5QZKv 00:34:54.367 19:33:39 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:54.367 19:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:54.625 nvme0n1 00:34:54.626 19:33:39 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:54.626 19:33:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:54.626 19:33:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:54.626 19:33:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:54.626 19:33:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:54.626 19:33:39 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:54.884 19:33:39 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:54.884 19:33:39 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:54.884 19:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:55.142 19:33:40 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:55.142 19:33:40 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:55.142 19:33:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.142 19:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.142 19:33:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.401 19:33:40 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:55.401 19:33:40 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:55.401 19:33:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:55.401 19:33:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:55.401 19:33:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:55.401 19:33:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:55.401 19:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.661 19:33:40 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:55.661 19:33:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:55.661 19:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:34:55.919 19:33:40 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:55.919 19:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:55.919 19:33:40 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:56.178 19:33:41 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:56.178 19:33:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TKeis5QZKv 00:34:56.178 19:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TKeis5QZKv 00:34:56.747 19:33:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Mf5Wn7zXCs 00:34:56.747 19:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Mf5Wn7zXCs 00:34:56.747 19:33:41 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:56.747 19:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:57.316 nvme0n1 00:34:57.316 19:33:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:57.316 19:33:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:57.576 19:33:42 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:57.576 "subsystems": [ 00:34:57.576 { 00:34:57.576 "subsystem": 
"keyring", 00:34:57.576 "config": [ 00:34:57.576 { 00:34:57.576 "method": "keyring_file_add_key", 00:34:57.576 "params": { 00:34:57.576 "name": "key0", 00:34:57.576 "path": "/tmp/tmp.TKeis5QZKv" 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "keyring_file_add_key", 00:34:57.576 "params": { 00:34:57.576 "name": "key1", 00:34:57.576 "path": "/tmp/tmp.Mf5Wn7zXCs" 00:34:57.576 } 00:34:57.576 } 00:34:57.576 ] 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "subsystem": "iobuf", 00:34:57.576 "config": [ 00:34:57.576 { 00:34:57.576 "method": "iobuf_set_options", 00:34:57.576 "params": { 00:34:57.576 "small_pool_count": 8192, 00:34:57.576 "large_pool_count": 1024, 00:34:57.576 "small_bufsize": 8192, 00:34:57.576 "large_bufsize": 135168, 00:34:57.576 "enable_numa": false 00:34:57.576 } 00:34:57.576 } 00:34:57.576 ] 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "subsystem": "sock", 00:34:57.576 "config": [ 00:34:57.576 { 00:34:57.576 "method": "sock_set_default_impl", 00:34:57.576 "params": { 00:34:57.576 "impl_name": "posix" 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "sock_impl_set_options", 00:34:57.576 "params": { 00:34:57.576 "impl_name": "ssl", 00:34:57.576 "recv_buf_size": 4096, 00:34:57.576 "send_buf_size": 4096, 00:34:57.576 "enable_recv_pipe": true, 00:34:57.576 "enable_quickack": false, 00:34:57.576 "enable_placement_id": 0, 00:34:57.576 "enable_zerocopy_send_server": true, 00:34:57.576 "enable_zerocopy_send_client": false, 00:34:57.576 "zerocopy_threshold": 0, 00:34:57.576 "tls_version": 0, 00:34:57.576 "enable_ktls": false 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "sock_impl_set_options", 00:34:57.576 "params": { 00:34:57.576 "impl_name": "posix", 00:34:57.576 "recv_buf_size": 2097152, 00:34:57.576 "send_buf_size": 2097152, 00:34:57.576 "enable_recv_pipe": true, 00:34:57.576 "enable_quickack": false, 00:34:57.576 "enable_placement_id": 0, 00:34:57.576 "enable_zerocopy_send_server": true, 
00:34:57.576 "enable_zerocopy_send_client": false, 00:34:57.576 "zerocopy_threshold": 0, 00:34:57.576 "tls_version": 0, 00:34:57.576 "enable_ktls": false 00:34:57.576 } 00:34:57.576 } 00:34:57.576 ] 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "subsystem": "vmd", 00:34:57.576 "config": [] 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "subsystem": "accel", 00:34:57.576 "config": [ 00:34:57.576 { 00:34:57.576 "method": "accel_set_options", 00:34:57.576 "params": { 00:34:57.576 "small_cache_size": 128, 00:34:57.576 "large_cache_size": 16, 00:34:57.576 "task_count": 2048, 00:34:57.576 "sequence_count": 2048, 00:34:57.576 "buf_count": 2048 00:34:57.576 } 00:34:57.576 } 00:34:57.576 ] 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "subsystem": "bdev", 00:34:57.576 "config": [ 00:34:57.576 { 00:34:57.576 "method": "bdev_set_options", 00:34:57.576 "params": { 00:34:57.576 "bdev_io_pool_size": 65535, 00:34:57.576 "bdev_io_cache_size": 256, 00:34:57.576 "bdev_auto_examine": true, 00:34:57.576 "iobuf_small_cache_size": 128, 00:34:57.576 "iobuf_large_cache_size": 16 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "bdev_raid_set_options", 00:34:57.576 "params": { 00:34:57.576 "process_window_size_kb": 1024, 00:34:57.576 "process_max_bandwidth_mb_sec": 0 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "bdev_iscsi_set_options", 00:34:57.576 "params": { 00:34:57.576 "timeout_sec": 30 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "bdev_nvme_set_options", 00:34:57.576 "params": { 00:34:57.576 "action_on_timeout": "none", 00:34:57.576 "timeout_us": 0, 00:34:57.576 "timeout_admin_us": 0, 00:34:57.576 "keep_alive_timeout_ms": 10000, 00:34:57.576 "arbitration_burst": 0, 00:34:57.576 "low_priority_weight": 0, 00:34:57.576 "medium_priority_weight": 0, 00:34:57.576 "high_priority_weight": 0, 00:34:57.576 "nvme_adminq_poll_period_us": 10000, 00:34:57.576 "nvme_ioq_poll_period_us": 0, 00:34:57.576 "io_queue_requests": 512, 
00:34:57.576 "delay_cmd_submit": true, 00:34:57.576 "transport_retry_count": 4, 00:34:57.576 "bdev_retry_count": 3, 00:34:57.576 "transport_ack_timeout": 0, 00:34:57.576 "ctrlr_loss_timeout_sec": 0, 00:34:57.576 "reconnect_delay_sec": 0, 00:34:57.576 "fast_io_fail_timeout_sec": 0, 00:34:57.576 "disable_auto_failback": false, 00:34:57.576 "generate_uuids": false, 00:34:57.576 "transport_tos": 0, 00:34:57.576 "nvme_error_stat": false, 00:34:57.576 "rdma_srq_size": 0, 00:34:57.576 "io_path_stat": false, 00:34:57.576 "allow_accel_sequence": false, 00:34:57.576 "rdma_max_cq_size": 0, 00:34:57.576 "rdma_cm_event_timeout_ms": 0, 00:34:57.576 "dhchap_digests": [ 00:34:57.576 "sha256", 00:34:57.576 "sha384", 00:34:57.576 "sha512" 00:34:57.576 ], 00:34:57.576 "dhchap_dhgroups": [ 00:34:57.576 "null", 00:34:57.576 "ffdhe2048", 00:34:57.576 "ffdhe3072", 00:34:57.576 "ffdhe4096", 00:34:57.576 "ffdhe6144", 00:34:57.576 "ffdhe8192" 00:34:57.576 ] 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "bdev_nvme_attach_controller", 00:34:57.576 "params": { 00:34:57.576 "name": "nvme0", 00:34:57.576 "trtype": "TCP", 00:34:57.576 "adrfam": "IPv4", 00:34:57.576 "traddr": "127.0.0.1", 00:34:57.576 "trsvcid": "4420", 00:34:57.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:57.576 "prchk_reftag": false, 00:34:57.576 "prchk_guard": false, 00:34:57.576 "ctrlr_loss_timeout_sec": 0, 00:34:57.576 "reconnect_delay_sec": 0, 00:34:57.576 "fast_io_fail_timeout_sec": 0, 00:34:57.576 "psk": "key0", 00:34:57.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:57.576 "hdgst": false, 00:34:57.576 "ddgst": false, 00:34:57.576 "multipath": "multipath" 00:34:57.576 } 00:34:57.576 }, 00:34:57.576 { 00:34:57.576 "method": "bdev_nvme_set_hotplug", 00:34:57.577 "params": { 00:34:57.577 "period_us": 100000, 00:34:57.577 "enable": false 00:34:57.577 } 00:34:57.577 }, 00:34:57.577 { 00:34:57.577 "method": "bdev_wait_for_examine" 00:34:57.577 } 00:34:57.577 ] 00:34:57.577 }, 00:34:57.577 { 
00:34:57.577 "subsystem": "nbd", 00:34:57.577 "config": [] 00:34:57.577 } 00:34:57.577 ] 00:34:57.577 }' 00:34:57.577 19:33:42 keyring_file -- keyring/file.sh@115 -- # killprocess 409218 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 409218 ']' 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 409218 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409218 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409218' 00:34:57.577 killing process with pid 409218 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@973 -- # kill 409218 00:34:57.577 Received shutdown signal, test time was about 1.000000 seconds 00:34:57.577 00:34:57.577 Latency(us) 00:34:57.577 [2024-12-06T18:33:42.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.577 [2024-12-06T18:33:42.626Z] =================================================================================================================== 00:34:57.577 [2024-12-06T18:33:42.626Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.577 19:33:42 keyring_file -- common/autotest_common.sh@978 -- # wait 409218 00:34:57.836 19:33:42 keyring_file -- keyring/file.sh@118 -- # bperfpid=410809 00:34:57.836 19:33:42 keyring_file -- keyring/file.sh@120 -- # waitforlisten 410809 /var/tmp/bperf.sock 00:34:57.836 19:33:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 410809 ']' 00:34:57.836 19:33:42 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:34:57.836 19:33:42 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:57.836 19:33:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.836 19:33:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.836 19:33:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.836 19:33:42 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:57.836 "subsystems": [ 00:34:57.836 { 00:34:57.836 "subsystem": "keyring", 00:34:57.836 "config": [ 00:34:57.836 { 00:34:57.836 "method": "keyring_file_add_key", 00:34:57.836 "params": { 00:34:57.836 "name": "key0", 00:34:57.836 "path": "/tmp/tmp.TKeis5QZKv" 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "keyring_file_add_key", 00:34:57.836 "params": { 00:34:57.836 "name": "key1", 00:34:57.836 "path": "/tmp/tmp.Mf5Wn7zXCs" 00:34:57.836 } 00:34:57.836 } 00:34:57.836 ] 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "subsystem": "iobuf", 00:34:57.836 "config": [ 00:34:57.836 { 00:34:57.836 "method": "iobuf_set_options", 00:34:57.836 "params": { 00:34:57.836 "small_pool_count": 8192, 00:34:57.836 "large_pool_count": 1024, 00:34:57.836 "small_bufsize": 8192, 00:34:57.836 "large_bufsize": 135168, 00:34:57.836 "enable_numa": false 00:34:57.836 } 00:34:57.836 } 00:34:57.836 ] 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "subsystem": "sock", 00:34:57.836 "config": [ 00:34:57.836 { 00:34:57.836 "method": "sock_set_default_impl", 00:34:57.836 "params": { 00:34:57.836 "impl_name": "posix" 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "sock_impl_set_options", 00:34:57.836 
"params": { 00:34:57.836 "impl_name": "ssl", 00:34:57.836 "recv_buf_size": 4096, 00:34:57.836 "send_buf_size": 4096, 00:34:57.836 "enable_recv_pipe": true, 00:34:57.836 "enable_quickack": false, 00:34:57.836 "enable_placement_id": 0, 00:34:57.836 "enable_zerocopy_send_server": true, 00:34:57.836 "enable_zerocopy_send_client": false, 00:34:57.836 "zerocopy_threshold": 0, 00:34:57.836 "tls_version": 0, 00:34:57.836 "enable_ktls": false 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "sock_impl_set_options", 00:34:57.836 "params": { 00:34:57.836 "impl_name": "posix", 00:34:57.836 "recv_buf_size": 2097152, 00:34:57.836 "send_buf_size": 2097152, 00:34:57.836 "enable_recv_pipe": true, 00:34:57.836 "enable_quickack": false, 00:34:57.836 "enable_placement_id": 0, 00:34:57.836 "enable_zerocopy_send_server": true, 00:34:57.836 "enable_zerocopy_send_client": false, 00:34:57.836 "zerocopy_threshold": 0, 00:34:57.836 "tls_version": 0, 00:34:57.836 "enable_ktls": false 00:34:57.836 } 00:34:57.836 } 00:34:57.836 ] 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "subsystem": "vmd", 00:34:57.836 "config": [] 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "subsystem": "accel", 00:34:57.836 "config": [ 00:34:57.836 { 00:34:57.836 "method": "accel_set_options", 00:34:57.836 "params": { 00:34:57.836 "small_cache_size": 128, 00:34:57.836 "large_cache_size": 16, 00:34:57.836 "task_count": 2048, 00:34:57.836 "sequence_count": 2048, 00:34:57.836 "buf_count": 2048 00:34:57.836 } 00:34:57.836 } 00:34:57.836 ] 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "subsystem": "bdev", 00:34:57.836 "config": [ 00:34:57.836 { 00:34:57.836 "method": "bdev_set_options", 00:34:57.836 "params": { 00:34:57.836 "bdev_io_pool_size": 65535, 00:34:57.836 "bdev_io_cache_size": 256, 00:34:57.836 "bdev_auto_examine": true, 00:34:57.836 "iobuf_small_cache_size": 128, 00:34:57.836 "iobuf_large_cache_size": 16 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "bdev_raid_set_options", 
00:34:57.836 "params": { 00:34:57.836 "process_window_size_kb": 1024, 00:34:57.836 "process_max_bandwidth_mb_sec": 0 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "bdev_iscsi_set_options", 00:34:57.836 "params": { 00:34:57.836 "timeout_sec": 30 00:34:57.836 } 00:34:57.836 }, 00:34:57.836 { 00:34:57.836 "method": "bdev_nvme_set_options", 00:34:57.836 "params": { 00:34:57.836 "action_on_timeout": "none", 00:34:57.836 "timeout_us": 0, 00:34:57.836 "timeout_admin_us": 0, 00:34:57.836 "keep_alive_timeout_ms": 10000, 00:34:57.836 "arbitration_burst": 0, 00:34:57.836 "low_priority_weight": 0, 00:34:57.836 "medium_priority_weight": 0, 00:34:57.836 "high_priority_weight": 0, 00:34:57.836 "nvme_adminq_poll_period_us": 10000, 00:34:57.836 "nvme_ioq_poll_period_us": 0, 00:34:57.836 "io_queue_requests": 512, 00:34:57.836 "delay_cmd_submit": true, 00:34:57.836 "transport_retry_count": 4, 00:34:57.836 "bdev_retry_count": 3, 00:34:57.836 "transport_ack_timeout": 0, 00:34:57.836 "ctrlr_loss_timeout_sec": 0, 00:34:57.836 "reconnect_delay_sec": 0, 00:34:57.836 "fast_io_fail_timeout_sec": 0, 00:34:57.837 "disable_auto_failback": false, 00:34:57.837 "generate_uuids": false, 00:34:57.837 "transport_tos": 0, 00:34:57.837 "nvme_error_stat": false, 00:34:57.837 "rdma_srq_size": 0, 00:34:57.837 "io_path_stat": false, 00:34:57.837 "allow_accel_sequence": false, 00:34:57.837 "rdma_max_cq_size": 0, 00:34:57.837 "rdma_cm_event_timeout_ms": 0, 00:34:57.837 "dhchap_digests": [ 00:34:57.837 "sha256", 00:34:57.837 "sha384", 00:34:57.837 "sha512" 00:34:57.837 ], 00:34:57.837 "dhchap_dhgroups": [ 00:34:57.837 "null", 00:34:57.837 "ffdhe2048", 00:34:57.837 "ffdhe3072", 00:34:57.837 "ffdhe4096", 00:34:57.837 "ffdhe6144", 00:34:57.837 "ffdhe8192" 00:34:57.837 ] 00:34:57.837 } 00:34:57.837 }, 00:34:57.837 { 00:34:57.837 "method": "bdev_nvme_attach_controller", 00:34:57.837 "params": { 00:34:57.837 "name": "nvme0", 00:34:57.837 "trtype": "TCP", 00:34:57.837 "adrfam": "IPv4", 
00:34:57.837 "traddr": "127.0.0.1", 00:34:57.837 "trsvcid": "4420", 00:34:57.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:57.837 "prchk_reftag": false, 00:34:57.837 "prchk_guard": false, 00:34:57.837 "ctrlr_loss_timeout_sec": 0, 00:34:57.837 "reconnect_delay_sec": 0, 00:34:57.837 "fast_io_fail_timeout_sec": 0, 00:34:57.837 "psk": "key0", 00:34:57.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:57.837 "hdgst": false, 00:34:57.837 "ddgst": false, 00:34:57.837 "multipath": "multipath" 00:34:57.837 } 00:34:57.837 }, 00:34:57.837 { 00:34:57.837 "method": "bdev_nvme_set_hotplug", 00:34:57.837 "params": { 00:34:57.837 "period_us": 100000, 00:34:57.837 "enable": false 00:34:57.837 } 00:34:57.837 }, 00:34:57.837 { 00:34:57.837 "method": "bdev_wait_for_examine" 00:34:57.837 } 00:34:57.837 ] 00:34:57.837 }, 00:34:57.837 { 00:34:57.837 "subsystem": "nbd", 00:34:57.837 "config": [] 00:34:57.837 } 00:34:57.837 ] 00:34:57.837 }' 00:34:57.837 19:33:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:57.837 [2024-12-06 19:33:42.714101] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:34:57.837 [2024-12-06 19:33:42.714183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410809 ] 00:34:57.837 [2024-12-06 19:33:42.778737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.837 [2024-12-06 19:33:42.835351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.095 [2024-12-06 19:33:43.014535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:58.095 19:33:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:58.095 19:33:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:58.095 19:33:43 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:58.095 19:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.095 19:33:43 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:58.354 19:33:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:58.354 19:33:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:58.612 19:33:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:58.612 19:33:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.612 19:33:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.612 19:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:58.612 19:33:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:58.870 19:33:43 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:58.870 19:33:43 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:58.870 19:33:43 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:58.870 19:33:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:58.870 19:33:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:58.870 19:33:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:58.870 19:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:59.128 19:33:43 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:59.128 19:33:43 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:59.128 19:33:43 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:59.128 19:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:59.386 19:33:44 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:59.386 19:33:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:59.386 19:33:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TKeis5QZKv /tmp/tmp.Mf5Wn7zXCs 00:34:59.386 19:33:44 keyring_file -- keyring/file.sh@20 -- # killprocess 410809 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 410809 ']' 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 410809 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 410809 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 410809' 00:34:59.387 killing process with pid 410809 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@973 -- # kill 410809 00:34:59.387 Received shutdown signal, test time was about 1.000000 seconds 00:34:59.387 00:34:59.387 Latency(us) 00:34:59.387 [2024-12-06T18:33:44.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.387 [2024-12-06T18:33:44.436Z] =================================================================================================================== 00:34:59.387 [2024-12-06T18:33:44.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:59.387 19:33:44 keyring_file -- common/autotest_common.sh@978 -- # wait 410809 00:34:59.645 19:33:44 keyring_file -- keyring/file.sh@21 -- # killprocess 409209 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 409209 ']' 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 409209 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409209 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409209' 00:34:59.645 killing process with pid 409209 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@973 -- # kill 409209 00:34:59.645 19:33:44 keyring_file -- common/autotest_common.sh@978 -- # wait 409209 00:34:59.903 00:34:59.903 real 0m14.502s 00:34:59.903 user 0m37.062s 00:34:59.903 sys 0m3.166s 00:34:59.903 19:33:44 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.903 19:33:44 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:59.903 ************************************ 00:34:59.903 END TEST keyring_file 00:34:59.903 ************************************ 00:34:59.903 19:33:44 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:59.903 19:33:44 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:59.903 19:33:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:59.903 19:33:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.903 19:33:44 -- common/autotest_common.sh@10 -- # set +x 00:34:59.903 ************************************ 00:34:59.903 START TEST keyring_linux 00:34:59.903 ************************************ 00:34:59.903 19:33:44 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:59.903 Joined session keyring: 156777889 00:35:00.162 * Looking for test storage... 
00:35:00.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:00.162 19:33:44 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:00.162 19:33:44 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:00.162 19:33:44 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:00.162 19:33:45 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:00.162 19:33:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:00.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.163 --rc genhtml_branch_coverage=1 00:35:00.163 --rc genhtml_function_coverage=1 00:35:00.163 --rc genhtml_legend=1 00:35:00.163 --rc geninfo_all_blocks=1 00:35:00.163 --rc geninfo_unexecuted_blocks=1 00:35:00.163 00:35:00.163 ' 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:00.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.163 --rc genhtml_branch_coverage=1 00:35:00.163 --rc genhtml_function_coverage=1 00:35:00.163 --rc genhtml_legend=1 00:35:00.163 --rc geninfo_all_blocks=1 00:35:00.163 --rc geninfo_unexecuted_blocks=1 00:35:00.163 00:35:00.163 ' 
00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:00.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.163 --rc genhtml_branch_coverage=1 00:35:00.163 --rc genhtml_function_coverage=1 00:35:00.163 --rc genhtml_legend=1 00:35:00.163 --rc geninfo_all_blocks=1 00:35:00.163 --rc geninfo_unexecuted_blocks=1 00:35:00.163 00:35:00.163 ' 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:00.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.163 --rc genhtml_branch_coverage=1 00:35:00.163 --rc genhtml_function_coverage=1 00:35:00.163 --rc genhtml_legend=1 00:35:00.163 --rc geninfo_all_blocks=1 00:35:00.163 --rc geninfo_unexecuted_blocks=1 00:35:00.163 00:35:00.163 ' 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.163 19:33:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.163 19:33:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.163 19:33:45 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.163 19:33:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.163 19:33:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:00.163 19:33:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:00.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:00.163 /tmp/:spdk-test:key0 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:00.163 19:33:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:00.163 19:33:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:00.163 /tmp/:spdk-test:key1 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=411168 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:00.163 19:33:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 411168 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 411168 ']' 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.163 19:33:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:00.423 [2024-12-06 19:33:45.229617] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:35:00.423 [2024-12-06 19:33:45.229747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411168 ] 00:35:00.423 [2024-12-06 19:33:45.295289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.423 [2024-12-06 19:33:45.354963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:00.681 [2024-12-06 19:33:45.632101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.681 null0 00:35:00.681 [2024-12-06 19:33:45.664145] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:00.681 [2024-12-06 19:33:45.664647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:00.681 192403606 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:00.681 649732109 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=411182 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:00.681 19:33:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 411182 /var/tmp/bperf.sock 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 411182 ']' 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.681 19:33:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:00.939 [2024-12-06 19:33:45.730435] Starting SPDK v25.01-pre git sha1 0787c2b4e / DPDK 24.03.0 initialization... 
00:35:00.939 [2024-12-06 19:33:45.730503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411182 ] 00:35:00.939 [2024-12-06 19:33:45.797191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.939 [2024-12-06 19:33:45.858397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.939 19:33:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.939 19:33:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:00.939 19:33:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:00.939 19:33:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:01.197 19:33:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:01.197 19:33:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:01.765 19:33:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:01.765 19:33:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:02.024 [2024-12-06 19:33:46.847384] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:02.024 nvme0n1 00:35:02.024 19:33:46 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:02.024 19:33:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:02.024 19:33:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:02.024 19:33:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:02.024 19:33:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:02.024 19:33:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.283 19:33:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:02.283 19:33:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:02.283 19:33:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:02.283 19:33:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:02.283 19:33:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:02.283 19:33:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:02.283 19:33:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@25 -- # sn=192403606 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 192403606 == \1\9\2\4\0\3\6\0\6 ]] 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 192403606 00:35:02.541 19:33:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:02.541 19:33:47 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:02.800 Running I/O for 1 seconds... 00:35:03.736 11558.00 IOPS, 45.15 MiB/s 00:35:03.736 Latency(us) 00:35:03.736 [2024-12-06T18:33:48.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.736 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:03.736 nvme0n1 : 1.01 11559.26 45.15 0.00 0.00 11004.89 6165.24 17185.00 00:35:03.736 [2024-12-06T18:33:48.785Z] =================================================================================================================== 00:35:03.736 [2024-12-06T18:33:48.785Z] Total : 11559.26 45.15 0.00 0.00 11004.89 6165.24 17185.00 00:35:03.736 { 00:35:03.736 "results": [ 00:35:03.736 { 00:35:03.736 "job": "nvme0n1", 00:35:03.736 "core_mask": "0x2", 00:35:03.736 "workload": "randread", 00:35:03.736 "status": "finished", 00:35:03.736 "queue_depth": 128, 00:35:03.736 "io_size": 4096, 00:35:03.737 "runtime": 1.011051, 00:35:03.737 "iops": 11559.258632848392, 00:35:03.737 "mibps": 45.15335403456403, 00:35:03.737 "io_failed": 0, 00:35:03.737 "io_timeout": 0, 00:35:03.737 "avg_latency_us": 11004.889632735329, 00:35:03.737 "min_latency_us": 6165.2385185185185, 00:35:03.737 "max_latency_us": 17184.995555555557 00:35:03.737 } 00:35:03.737 ], 00:35:03.737 "core_count": 1 00:35:03.737 } 00:35:03.737 19:33:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:03.737 19:33:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:03.994 19:33:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:03.994 19:33:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:03.994 19:33:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:03.994 19:33:48 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:03.994 19:33:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:03.994 19:33:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:04.253 19:33:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:04.253 19:33:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:04.253 19:33:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:04.253 19:33:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.253 19:33:49 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:04.253 19:33:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:04.514 [2024-12-06 19:33:49.438563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:04.514 [2024-12-06 19:33:49.438758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b560 (107): Transport endpoint is not connected 00:35:04.514 [2024-12-06 19:33:49.439755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b560 (9): Bad file descriptor 00:35:04.514 [2024-12-06 19:33:49.440755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:04.514 [2024-12-06 19:33:49.440786] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:04.514 [2024-12-06 19:33:49.440802] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:04.514 [2024-12-06 19:33:49.440818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:04.514 request: 00:35:04.514 { 00:35:04.514 "name": "nvme0", 00:35:04.514 "trtype": "tcp", 00:35:04.514 "traddr": "127.0.0.1", 00:35:04.514 "adrfam": "ipv4", 00:35:04.514 "trsvcid": "4420", 00:35:04.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.514 "prchk_reftag": false, 00:35:04.514 "prchk_guard": false, 00:35:04.514 "hdgst": false, 00:35:04.514 "ddgst": false, 00:35:04.514 "psk": ":spdk-test:key1", 00:35:04.514 "allow_unrecognized_csi": false, 00:35:04.514 "method": "bdev_nvme_attach_controller", 00:35:04.514 "req_id": 1 00:35:04.514 } 00:35:04.514 Got JSON-RPC error response 00:35:04.514 response: 00:35:04.514 { 00:35:04.514 "code": -5, 00:35:04.514 "message": "Input/output error" 00:35:04.514 } 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@33 -- # sn=192403606 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 192403606 00:35:04.514 1 links removed 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:04.514 
19:33:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@33 -- # sn=649732109 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 649732109 00:35:04.514 1 links removed 00:35:04.514 19:33:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 411182 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 411182 ']' 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 411182 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411182 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411182' 00:35:04.514 killing process with pid 411182 00:35:04.514 19:33:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 411182 00:35:04.514 Received shutdown signal, test time was about 1.000000 seconds 00:35:04.514 00:35:04.515 Latency(us) 00:35:04.515 [2024-12-06T18:33:49.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:04.515 [2024-12-06T18:33:49.564Z] =================================================================================================================== 00:35:04.515 [2024-12-06T18:33:49.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:04.515 19:33:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 411182 
00:35:04.775 19:33:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 411168
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 411168 ']'
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 411168
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411168
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411168'
killing process with pid 411168
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 411168
00:35:04.775 19:33:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 411168
00:35:05.344
00:35:05.344 real 0m5.275s
00:35:05.344 user 0m10.452s
00:35:05.344 sys 0m1.612s
19:33:50 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:05.344 19:33:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:05.344 ************************************
00:35:05.344 END TEST keyring_linux
00:35:05.344 ************************************
00:35:05.344 19:33:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:05.344 19:33:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:05.344 19:33:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:05.344 19:33:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:05.344 19:33:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:05.344 19:33:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:05.344 19:33:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:05.344 19:33:50 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:05.344 19:33:50 -- common/autotest_common.sh@10 -- # set +x
00:35:05.344 19:33:50 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:05.344 19:33:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:05.344 19:33:50 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:05.344 19:33:50 -- common/autotest_common.sh@10 -- # set +x
00:35:07.266 INFO: APP EXITING
00:35:07.266 INFO: killing all VMs
00:35:07.266 INFO: killing vhost app
00:35:07.266 INFO: EXIT DONE
00:35:08.642 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:35:08.642 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:35:08.642 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:35:08.642 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:35:08.642 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:35:08.642 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:35:08.642 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:35:08.642 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:35:08.642 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:35:08.642 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:35:08.642 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:35:08.642 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:35:08.642 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:35:08.642 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:35:08.642 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:35:08.642 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:35:08.642 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:35:10.020 Cleaning
00:35:10.020 Removing: /var/run/dpdk/spdk0/config
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:10.020 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:10.020 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:10.020 Removing: /var/run/dpdk/spdk1/config
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:10.020 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:10.020 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:10.020 Removing: /var/run/dpdk/spdk2/config
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:10.020 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:10.020 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:10.020 Removing: /var/run/dpdk/spdk3/config
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:10.020 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:10.021 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:10.021 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:10.021 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:10.021 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:10.021 Removing: /var/run/dpdk/spdk4/config
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:10.021 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:10.021 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:10.021 Removing: /dev/shm/bdev_svc_trace.1
00:35:10.021 Removing: /dev/shm/nvmf_trace.0
00:35:10.021 Removing: /dev/shm/spdk_tgt_trace.pid86524
00:35:10.021 Removing: /var/run/dpdk/spdk0
00:35:10.021 Removing: /var/run/dpdk/spdk1
00:35:10.021 Removing: /var/run/dpdk/spdk2
00:35:10.021 Removing: /var/run/dpdk/spdk3
00:35:10.021 Removing: /var/run/dpdk/spdk4
00:35:10.021 Removing: /var/run/dpdk/spdk_pid100084
00:35:10.021 Removing: /var/run/dpdk/spdk_pid103203
00:35:10.021 Removing: /var/run/dpdk/spdk_pid110359
00:35:10.021 Removing: /var/run/dpdk/spdk_pid110767
00:35:10.021 Removing: /var/run/dpdk/spdk_pid113308
00:35:10.021 Removing: /var/run/dpdk/spdk_pid113580
00:35:10.021 Removing: /var/run/dpdk/spdk_pid116123
00:35:10.021 Removing: /var/run/dpdk/spdk_pid119907
00:35:10.021 Removing: /var/run/dpdk/spdk_pid122045
00:35:10.021 Removing: /var/run/dpdk/spdk_pid128515
00:35:10.021 Removing: /var/run/dpdk/spdk_pid133908
00:35:10.021 Removing: /var/run/dpdk/spdk_pid135113
00:35:10.021 Removing: /var/run/dpdk/spdk_pid135900
00:35:10.021 Removing: /var/run/dpdk/spdk_pid146954
00:35:10.021 Removing: /var/run/dpdk/spdk_pid149358
00:35:10.021 Removing: /var/run/dpdk/spdk_pid176944
00:35:10.021 Removing: /var/run/dpdk/spdk_pid180762
00:35:10.021 Removing: /var/run/dpdk/spdk_pid184602
00:35:10.021 Removing: /var/run/dpdk/spdk_pid189000
00:35:10.021 Removing: /var/run/dpdk/spdk_pid189006
00:35:10.021 Removing: /var/run/dpdk/spdk_pid189665
00:35:10.021 Removing: /var/run/dpdk/spdk_pid190261
00:35:10.021 Removing: /var/run/dpdk/spdk_pid190875
00:35:10.021 Removing: /var/run/dpdk/spdk_pid191271
00:35:10.021 Removing: /var/run/dpdk/spdk_pid191296
00:35:10.021 Removing: /var/run/dpdk/spdk_pid191537
00:35:10.021 Removing: /var/run/dpdk/spdk_pid191674
00:35:10.021 Removing: /var/run/dpdk/spdk_pid191686
00:35:10.021 Removing: /var/run/dpdk/spdk_pid192340
00:35:10.021 Removing: /var/run/dpdk/spdk_pid192888
00:35:10.021 Removing: /var/run/dpdk/spdk_pid193541
00:35:10.021 Removing: /var/run/dpdk/spdk_pid193937
00:35:10.021 Removing: /var/run/dpdk/spdk_pid193960
00:35:10.021 Removing: /var/run/dpdk/spdk_pid194205
00:35:10.021 Removing: /var/run/dpdk/spdk_pid195104
00:35:10.021 Removing: /var/run/dpdk/spdk_pid195899
00:35:10.021 Removing: /var/run/dpdk/spdk_pid201189
00:35:10.021 Removing: /var/run/dpdk/spdk_pid230254
00:35:10.021 Removing: /var/run/dpdk/spdk_pid233210
00:35:10.021 Removing: /var/run/dpdk/spdk_pid234391
00:35:10.021 Removing: /var/run/dpdk/spdk_pid235713
00:35:10.021 Removing: /var/run/dpdk/spdk_pid235854
00:35:10.278 Removing: /var/run/dpdk/spdk_pid235994
00:35:10.278 Removing: /var/run/dpdk/spdk_pid236134
00:35:10.278 Removing: /var/run/dpdk/spdk_pid236604
00:35:10.278 Removing: /var/run/dpdk/spdk_pid237979
00:35:10.278 Removing: /var/run/dpdk/spdk_pid238771
00:35:10.278 Removing: /var/run/dpdk/spdk_pid239203
00:35:10.278 Removing: /var/run/dpdk/spdk_pid240813
00:35:10.278 Removing: /var/run/dpdk/spdk_pid241119
00:35:10.278 Removing: /var/run/dpdk/spdk_pid241683
00:35:10.278 Removing: /var/run/dpdk/spdk_pid244209
00:35:10.278 Removing: /var/run/dpdk/spdk_pid247518
00:35:10.278 Removing: /var/run/dpdk/spdk_pid247519
00:35:10.279 Removing: /var/run/dpdk/spdk_pid247520
00:35:10.279 Removing: /var/run/dpdk/spdk_pid249759
00:35:10.279 Removing: /var/run/dpdk/spdk_pid254769
00:35:10.279 Removing: /var/run/dpdk/spdk_pid257551
00:35:10.279 Removing: /var/run/dpdk/spdk_pid261418
00:35:10.279 Removing: /var/run/dpdk/spdk_pid262768
00:35:10.279 Removing: /var/run/dpdk/spdk_pid263867
00:35:10.279 Removing: /var/run/dpdk/spdk_pid264962
00:35:10.279 Removing: /var/run/dpdk/spdk_pid267735
00:35:10.279 Removing: /var/run/dpdk/spdk_pid270340
00:35:10.279 Removing: /var/run/dpdk/spdk_pid272718
00:35:10.279 Removing: /var/run/dpdk/spdk_pid276981
00:35:10.279 Removing: /var/run/dpdk/spdk_pid276989
00:35:10.279 Removing: /var/run/dpdk/spdk_pid279901
00:35:10.279 Removing: /var/run/dpdk/spdk_pid280037
00:35:10.279 Removing: /var/run/dpdk/spdk_pid280173
00:35:10.279 Removing: /var/run/dpdk/spdk_pid280439
00:35:10.279 Removing: /var/run/dpdk/spdk_pid280566
00:35:10.279 Removing: /var/run/dpdk/spdk_pid283243
00:35:10.279 Removing: /var/run/dpdk/spdk_pid283688
00:35:10.279 Removing: /var/run/dpdk/spdk_pid286370
00:35:10.279 Removing: /var/run/dpdk/spdk_pid288238
00:35:10.279 Removing: /var/run/dpdk/spdk_pid291774
00:35:10.279 Removing: /var/run/dpdk/spdk_pid295150
00:35:10.279 Removing: /var/run/dpdk/spdk_pid302543
00:35:10.279 Removing: /var/run/dpdk/spdk_pid306977
00:35:10.279 Removing: /var/run/dpdk/spdk_pid307040
00:35:10.279 Removing: /var/run/dpdk/spdk_pid320139
00:35:10.279 Removing: /var/run/dpdk/spdk_pid320664
00:35:10.279 Removing: /var/run/dpdk/spdk_pid321081
00:35:10.279 Removing: /var/run/dpdk/spdk_pid321491
00:35:10.279 Removing: /var/run/dpdk/spdk_pid322074
00:35:10.279 Removing: /var/run/dpdk/spdk_pid322586
00:35:10.279 Removing: /var/run/dpdk/spdk_pid323012
00:35:10.279 Removing: /var/run/dpdk/spdk_pid323412
00:35:10.279 Removing: /var/run/dpdk/spdk_pid325937
00:35:10.279 Removing: /var/run/dpdk/spdk_pid326082
00:35:10.279 Removing: /var/run/dpdk/spdk_pid329917
00:35:10.279 Removing: /var/run/dpdk/spdk_pid330068
00:35:10.279 Removing: /var/run/dpdk/spdk_pid333444
00:35:10.279 Removing: /var/run/dpdk/spdk_pid336419
00:35:10.279 Removing: /var/run/dpdk/spdk_pid343560
00:35:10.279 Removing: /var/run/dpdk/spdk_pid344021
00:35:10.279 Removing: /var/run/dpdk/spdk_pid346542
00:35:10.279 Removing: /var/run/dpdk/spdk_pid346702
00:35:10.279 Removing: /var/run/dpdk/spdk_pid349354
00:35:10.279 Removing: /var/run/dpdk/spdk_pid353059
00:35:10.279 Removing: /var/run/dpdk/spdk_pid355226
00:35:10.279 Removing: /var/run/dpdk/spdk_pid361638
00:35:10.279 Removing: /var/run/dpdk/spdk_pid366867
00:35:10.279 Removing: /var/run/dpdk/spdk_pid368049
00:35:10.279 Removing: /var/run/dpdk/spdk_pid368775
00:35:10.279 Removing: /var/run/dpdk/spdk_pid379554
00:35:10.279 Removing: /var/run/dpdk/spdk_pid381824
00:35:10.279 Removing: /var/run/dpdk/spdk_pid383822
00:35:10.279 Removing: /var/run/dpdk/spdk_pid388878
00:35:10.279 Removing: /var/run/dpdk/spdk_pid388892
00:35:10.279 Removing: /var/run/dpdk/spdk_pid391822
00:35:10.279 Removing: /var/run/dpdk/spdk_pid393223
00:35:10.279 Removing: /var/run/dpdk/spdk_pid394616
00:35:10.279 Removing: /var/run/dpdk/spdk_pid395414
00:35:10.279 Removing: /var/run/dpdk/spdk_pid396894
00:35:10.279 Removing: /var/run/dpdk/spdk_pid397762
00:35:10.279 Removing: /var/run/dpdk/spdk_pid403099
00:35:10.279 Removing: /var/run/dpdk/spdk_pid403579
00:35:10.279 Removing: /var/run/dpdk/spdk_pid403973
00:35:10.279 Removing: /var/run/dpdk/spdk_pid406041
00:35:10.279 Removing: /var/run/dpdk/spdk_pid406451
00:35:10.279 Removing: /var/run/dpdk/spdk_pid406806
00:35:10.279 Removing: /var/run/dpdk/spdk_pid409209
00:35:10.279 Removing: /var/run/dpdk/spdk_pid409218
00:35:10.279 Removing: /var/run/dpdk/spdk_pid410809
00:35:10.279 Removing: /var/run/dpdk/spdk_pid411168
00:35:10.279 Removing: /var/run/dpdk/spdk_pid411182
00:35:10.279 Removing: /var/run/dpdk/spdk_pid84843
00:35:10.279 Removing: /var/run/dpdk/spdk_pid85582
00:35:10.279 Removing: /var/run/dpdk/spdk_pid86524
00:35:10.279 Removing: /var/run/dpdk/spdk_pid86854
00:35:10.279 Removing: /var/run/dpdk/spdk_pid87547
00:35:10.279 Removing: /var/run/dpdk/spdk_pid87688
00:35:10.279 Removing: /var/run/dpdk/spdk_pid88396
00:35:10.279 Removing: /var/run/dpdk/spdk_pid88532
00:35:10.279 Removing: /var/run/dpdk/spdk_pid88790
00:35:10.279 Removing: /var/run/dpdk/spdk_pid89993
00:35:10.279 Removing: /var/run/dpdk/spdk_pid90910
00:35:10.279 Removing: /var/run/dpdk/spdk_pid91231
00:35:10.279 Removing: /var/run/dpdk/spdk_pid91432
00:35:10.279 Removing: /var/run/dpdk/spdk_pid91640
00:35:10.279 Removing: /var/run/dpdk/spdk_pid91865
00:35:10.279 Removing: /var/run/dpdk/spdk_pid92118
00:35:10.279 Removing: /var/run/dpdk/spdk_pid92276
00:35:10.279 Removing: /var/run/dpdk/spdk_pid92464
00:35:10.279 Removing: /var/run/dpdk/spdk_pid92775
00:35:10.279 Removing: /var/run/dpdk/spdk_pid95262
00:35:10.279 Removing: /var/run/dpdk/spdk_pid95430
00:35:10.279 Removing: /var/run/dpdk/spdk_pid95592
00:35:10.279 Removing: /var/run/dpdk/spdk_pid95599
00:35:10.279 Removing: /var/run/dpdk/spdk_pid95962
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96029
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96345
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96469
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96641
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96769
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96931
00:35:10.279 Removing: /var/run/dpdk/spdk_pid96952
00:35:10.279 Removing: /var/run/dpdk/spdk_pid97438
00:35:10.279 Removing: /var/run/dpdk/spdk_pid97601
00:35:10.279 Removing: /var/run/dpdk/spdk_pid97807
00:35:10.279 Clean
00:35:10.536 19:33:55 -- common/autotest_common.sh@1453 -- # return 0
00:35:10.536 19:33:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:10.536 19:33:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:10.536 19:33:55 -- common/autotest_common.sh@10 -- # set +x
00:35:10.536 19:33:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:10.536 19:33:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:10.536 19:33:55 -- common/autotest_common.sh@10 -- # set +x
00:35:10.536 19:33:55 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:10.536 19:33:55 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:10.536 19:33:55 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:10.536 19:33:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:10.536 19:33:55 -- spdk/autotest.sh@398 -- # hostname
00:35:10.536 19:33:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:10.794 geninfo: WARNING: invalid characters removed from testname!
00:35:42.870 19:34:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:45.520 19:34:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:48.815 19:34:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:51.349 19:34:36 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:54.646 19:34:39 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.942 19:34:42 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:00.470 19:34:45 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:00.470 19:34:45 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:00.470 19:34:45 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:00.470 19:34:45 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:00.470 19:34:45 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:00.470 19:34:45 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:00.470 + [[ -n 13635 ]]
00:36:00.470 + sudo kill 13635
00:36:00.478 [Pipeline] }
00:36:00.491 [Pipeline] // stage
00:36:00.496 [Pipeline] }
00:36:00.508 [Pipeline] // timeout
00:36:00.513 [Pipeline] }
00:36:00.525 [Pipeline] // catchError
00:36:00.529 [Pipeline] }
00:36:00.542 [Pipeline] // wrap
00:36:00.547 [Pipeline] }
00:36:00.558 [Pipeline] // catchError
00:36:00.566 [Pipeline] stage
00:36:00.568 [Pipeline] { (Epilogue)
00:36:00.578 [Pipeline] catchError
00:36:00.579 [Pipeline] {
00:36:00.590 [Pipeline] echo
00:36:00.591 Cleanup processes
00:36:00.596 [Pipeline] sh
00:36:00.881 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.881 421856 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:00.894 [Pipeline] sh
00:36:01.181 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:01.181 ++ grep -v 'sudo pgrep'
00:36:01.181 ++ awk '{print $1}'
00:36:01.181 + sudo kill -9
00:36:01.181 + true
00:36:01.193 [Pipeline] sh
00:36:01.479 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:11.458 [Pipeline] sh
00:36:11.745 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:11.745 Artifacts sizes are good
00:36:11.758 [Pipeline] archiveArtifacts
00:36:11.765 Archiving artifacts
00:36:12.233 [Pipeline] sh
00:36:12.519 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:12.534 [Pipeline] cleanWs
00:36:12.543 [WS-CLEANUP] Deleting project workspace...
00:36:12.544 [WS-CLEANUP] Deferred wipeout is used...
00:36:12.550 [WS-CLEANUP] done
00:36:12.552 [Pipeline] }
00:36:12.568 [Pipeline] // catchError
00:36:12.578 [Pipeline] sh
00:36:12.861 + logger -p user.info -t JENKINS-CI
00:36:12.869 [Pipeline] }
00:36:12.884 [Pipeline] // stage
00:36:12.890 [Pipeline] }
00:36:12.904 [Pipeline] // node
00:36:12.910 [Pipeline] End of Pipeline
00:36:12.944 Finished: SUCCESS